Miscellaneous Proofs and exercises, Part 1

I have encountered a large number of cute proofs and exercises, too cute in fact to be left unrecorded, so I will collect them in this blog post.  It spans a lot of different areas of physics, with a certain emphasis on elementary quantum physics.  Some of the exercises are elementary; others are quite nontrivial.

No Degenerate Eigenstates in One Dimension.

Virial Theorem.

We derived a virial theorem in the section Mechanical Similarity in Classical Mechanics.  It turns out there is a QM analog for expectation values:

\frac{d}{dt}<xp> = 2<T> - <x \frac{dV}{dx}>

Proof:

\frac{d}{dt}<xp> = \frac{i}{\hbar}<[H, xp]> = \frac{i}{\hbar} <[H,x]p + x[H,p]>

Plug in [H,x]= - \frac{i \hbar p}{m} and [H,p]= i\hbar \frac{dV}{dx}:

\frac{d}{dt}<xp> = 2<T> - <x \frac{dV}{dx}>

For stationary states, the LHS is zero (expectation values are constant in time), which means

2<T>= <x \frac{dV}{dx}>
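Here is a quick numerical sanity check of the stationary-state statement, a sketch with my own test case: the harmonic oscillator ground state in units \hbar = m = \omega = 1, where V = x^2/2 so <x \frac{dV}{dx}> = <x^2>.

```python
import numpy as np

# Sketch: check 2<T> = <x dV/dx> for the oscillator ground state
# psi(x) = pi^(-1/4) exp(-x^2/2), with hbar = m = omega = 1.
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
psi = np.pi**-0.25 * np.exp(-x**2 / 2)

d2psi = np.gradient(np.gradient(psi, dx), dx)   # psi''
T = np.sum(-0.5 * psi * d2psi) * dx             # <T> = <p^2/2m> (psi is real)
xVp = np.sum(psi**2 * x**2) * dx                # <x dV/dx> = <x^2>

print(2 * T, xVp)   # both ~0.5
```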

Coherent States for Harmonic Oscillator

Claim: Coherent states of the harmonic oscillator are eigenfunctions of the lowering operator. (There are no normalizable eigenfunctions of the raising operator.)

Proof:

Start from the answer: suppose a_{-}|y> = y|y>.

Calculate <x>, <x^2>, <p>, and <p^2> by expressing:
x = \sqrt{\frac{\hbar}{2m\omega}}(a_-+ a_+)

p = i \sqrt{\frac{m\omega \hbar}{2}}(a_+-a_-)

x^2 = \frac{\hbar}{2m \omega}(a_-^2 + a_+^2 + 2a_+a_- + 1)

p^2 = \frac{m\omega \hbar}{2}(2a_+a_- + 1 - a_+^2 - a_-^2)

using a_- a_+ = a_+ a_- + [a_-, a_+] = a_+ a_- + 1.  Plugging a_-|y> = y|y> into these expressions gives \sigma_x^2 = \frac{\hbar}{2m\omega} and \sigma_p^2 = \frac{m\omega \hbar}{2}, so \sigma_x \sigma_p = \frac{\hbar}{2}: coherent states saturate the uncertainty bound.
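A numerical sketch of the claim (truncation size and the test eigenvalue y are my own choices; units \hbar = m = \omega = 1): build truncated ladder operators, form the state with a_-|y> = y|y>, and check that it saturates the uncertainty bound.

```python
import numpy as np
from math import factorial

# Sketch: coherent state |y> has coefficients y^n / sqrt(n!) in the Fock basis.
N = 60                                     # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), 1)   # lowering operator a_-
ad = a.conj().T                            # raising operator a_+
x = (a + ad) / np.sqrt(2)
p = 1j * (ad - a) / np.sqrt(2)

y = 0.7 + 0.3j
state = np.array([y**n / np.sqrt(factorial(n)) for n in range(N)])
state /= np.linalg.norm(state)             # normalization

ev = lambda A: (state.conj() @ A @ state).real
sig_x = np.sqrt(ev(x @ x) - ev(x)**2)
sig_p = np.sqrt(ev(p @ p) - ev(p)**2)
print(sig_x * sig_p)                       # ~0.5 = hbar/2
```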

WKB

Generalized Uncertainty Principle:

The uncertainty principle for the variances of 2 incompatible (non-commuting) observables is a direct consequence of the Cauchy-Schwarz inequality (CS). Recall CS states:

|(f,g)|^2 <= (f,f)(g,g)

and equality holds when f is a multiple of g.  Furthermore, since the magnitude of a complex number is at least the magnitude of its imaginary part, we can also write the weaker statement:

|\frac{1}{2i} ((f,g)-(g,f))|^2 \leq (f,f)(g,g)

Consider 2 observables A, B, and a state \psi.  Then

\sigma_A^2= (\psi, (A- \bar{A})^2 \psi) = |A \psi|^2- \bar{A}^2

\sigma_B^2 = (\psi, (B- \bar{B})^2 \psi)= |B \psi|^2- \bar{B}^2

where \bar{A}= (\psi, A \psi) and similarly for \bar{B}.

Note if we call f = (A- \bar{A}) \psi and g = (B- \bar{B}) \psi, we can plug them back into the inequality above.  On the RHS, we have

\sigma_A^2 \sigma_B^2

On the LHS, since (f,g) - (g,f) = (\psi, [A,B] \psi), we have:

|\frac{1}{2i}(\psi, [A,B] \psi)|^2

from which we obtain:

\sigma_A \sigma_B >= |\frac{1}{2i}<[A,B]>|

We note that equality is achieved when f is a purely imaginary multiple of g.  So in the case of A = X, B = P, we can solve for the wavefunctions that hit the uncertainty limit (coherent states):

(x - \bar{x})f = iC (p - \bar{p})f, which for \bar{x} = \bar{p} = 0 reads xf = iC(-i\hbar \frac{\partial f}{\partial x}) = C\hbar \frac{\partial f}{\partial x}

which yields that f is a Gaussian (for C < 0, so that f is normalizable).  No surprise there…
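A grid-based sketch (the grid and the two test states are my own choices; \hbar = 1): the Gaussian should saturate \sigma_x \sigma_p = 1/2, while the first excited oscillator state should exceed it.

```python
import numpy as np

# Sketch: compute sigma_x * sigma_p on a grid, with the momentum
# distribution obtained from the FFT of the wavefunction.
x = np.linspace(-20, 20, 2**12)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(len(x), d=dx)   # momentum grid (p = hbar k)

def uncertainty_product(psi):
    prob = np.abs(psi)**2
    prob /= prob.sum() * dx
    var_x = (prob * x**2).sum() * dx - ((prob * x).sum() * dx)**2
    w = np.abs(np.fft.fft(psi))**2
    w /= w.sum()                               # momentum probabilities
    var_p = (w * k**2).sum() - (w * k).sum()**2
    return np.sqrt(var_x * var_p)

print(uncertainty_product(np.exp(-x**2 / 2)))      # ~0.5: Gaussian saturates
print(uncertainty_product(x * np.exp(-x**2 / 2)))  # ~1.5: first excited state
```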

Momentum Operator in Quantum Mechanics

I know of 2.5 ways to get the form of the momentum operator in quantum mechanics.  I say 2.5 because it is really 2 ways, one of which has variant versions.

Method 1: Via the generator of translations

We view momentum as the generator of translations, the same way classical momentum appears in the generating function of the classical canonical transformation for translations.

First let’s define a translation in QM:

T(dx) = I - i K \cdot dx

T(dx) |x' \rangle = |x'+ dx \rangle

where K is a hermitian operator.  Note that

T(dx) T(-dx) = I + O(dx^2)

and [x_i, K_j]= iI \delta_{ij}
Since the generating function of an infinitesimal translation (X = x + dx, P = p) in classical mechanics is:

F(x, P) = x \cdot P + p \cdot dx

where the first term generates the identity and the second generates translations, we suppose that K is related to the momentum operator in QM.  The problem is that K has units of \frac{1}{[length]} = \frac{[momentum]}{\hbar}, so instead we set:
K = \frac{p}{\hbar},

and T(dx) = 1 - \frac{ip \cdot dx}{\hbar}

from which we obtain the commutation relations:
[x_i, p_j] = i \hbar \delta_{ij}

We now have a good idea of what the operator looks like for an infinitesimal translation.  How about finite translations? We just compound a lot of small translations:

T(\Delta x) = \lim_{N \rightarrow \infty} (1 - \frac{i p_x \Delta x}{N \hbar})^N = \exp(-\frac{i p_x \Delta x}{\hbar})

To get the p representation in the x basis, we apply a little trick:

(1 - \frac{ip \, dx'}{\hbar})|\alpha \rangle = \int dx' \, T(dx') |x'\rangle \langle x'| \alpha \rangle = \int dx' \, |x' + dx' \rangle \langle x'| \alpha \rangle = \int dx' \, |x' \rangle \langle x' - dx'| \alpha \rangle

Now we can Taylor expand the bra:

= \int dx' \, |x' \rangle (\langle x'| \alpha \rangle - dx' \frac{\partial}{\partial x'} \langle x'| \alpha \rangle)

Matching the terms of order dx' gives \langle x' |p| \alpha \rangle = -i \hbar \partial_{x'} \alpha(x')
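As a sanity check, here is a sketch that represents p as a central-difference matrix on a grid (grid size and test function are my own choices, \hbar = 1) and verifies [x, p]\psi = i\psi away from the boundary:

```python
import numpy as np

# Sketch: p = -i d/dx via central differences; check [x, p] psi = i psi.
N = 2000
x = np.linspace(-10, 10, N)
dx = x[1] - x[0]

D = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2 * dx)
P = -1j * D

psi = np.exp(-x**2)                      # vanishes at the boundary
comm = x * (P @ psi) - P @ (x * psi)     # [X, P] psi
err = np.abs(comm[5:-5] - 1j * psi[5:-5]).max()
print(err)                               # ~1e-4: finite-difference error only
```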

Method 2: Nifty guess or powerful math trick

We use the fact that the commutator [x,p] = i\hbar I, and the identity [a,bc] = [a,b]c + b[a,c].

Then we compute

[x^n, p] = x^{n-1}[x,p] + x[x^{n-1},p] = i \hbar x^{n-1} + x[x^{n-1},p]

If we keep expanding, we note that we will obtain something like:

[x^n, p] = n i \hbar x^{n-1}

Since any “nice” function f(x) can be Taylor expanded as a power series in x, we conclude that p must act like a derivative operator on f(x); in fact,

<x| p = -i\hbar \partial_x <x|
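A short symbolic sketch of the computation above, with p acting as -i\hbar \partial_x on a generic function f(x):

```python
import sympy as sp

# Sketch: verify [x^n, p] = i hbar n x^(n-1) for the first few n.
x, hbar = sp.symbols('x hbar')
f = sp.Function('f')(x)
p = lambda g: -sp.I * hbar * sp.diff(g, x)   # p as a derivative operator

for n in range(1, 6):
    comm = x**n * p(f) - p(x**n * f)
    print(sp.simplify(comm - sp.I * hbar * n * x**(n - 1) * f))   # 0
```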

Posted in Physics, Quantum Physics, Statistical Physics | Comments Off on Miscellaneous Proofs and exercises, Part 1

Heisenberg Picture

In the Schrödinger formulation of quantum mechanics, we vary the state vector while keeping operators time independent.  Another way to determine the time evolution of observables is to fix the state vector and rotate the operators instead.  The link between the two pictures is the Hamiltonian, which we’ll consider to be time independent for now.

In the Schrödinger picture:

|\psi (t)>= U(t, t_0)|\psi_0>

where, if H is time independent (taking t_0 = 0),
U(t, 0) = e^{\frac{-iHt}{\hbar}}

Note that for a given operator A, its expectation value evolves according to
<\psi_0|e^{\frac{iHt}{\hbar}} A e^{\frac{-iHt}{\hbar}} |\psi_0>.
Instead of evolving the state vector, we define a time-dependent operator

A(t)= e^{\frac{iHt}{\hbar}} A e^{\frac{-iHt}{\hbar}}

where A is the Schrödinger-picture operator.  To complete the dynamics, we need a differential equation whose solution is shown above:

\frac{dA}{dt}= \frac{iH}{\hbar} e^{\frac{iHt}{\hbar}}A e^{\frac{-iHt}{\hbar}}- e^{\frac{iHt}{\hbar}}A e^{\frac{-iHt}{\hbar}} \frac{iH}{\hbar}

Since H commutes with the exponentials e^{\pm iHt/\hbar}, this simplifies to

\frac{dA(t)}{dt}= \frac{i[H,A(t)]}{\hbar}

which is the Heisenberg equation of motion.

We should immediately note that it looks exactly the same as the evolution equation for a classical observable A(q,p):

\frac{dA}{dt}= \{ H,A \}

where the curly braces stand for the Poisson bracket.

Note that if the operator A has explicit time dependence, we only need to make a small modification, with the analogy to classical mechanics in mind:

\frac{dA}{dt}= \frac{i[H(t), A(t)]}{\hbar}+ \frac{\partial A}{\partial t}
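A small numerical sketch of the Heisenberg picture (truncated harmonic oscillator in units \hbar = m = \omega = 1; size and time are my own choices): the Heisenberg equation predicts x(t) = x \cos t + p \sin t, which we can compare with A(t) = e^{iHt/\hbar} A e^{-iHt/\hbar}.

```python
import numpy as np
from scipy.linalg import expm

# Sketch: Heisenberg evolution of x for H = a_+ a_- + 1/2.
N = 40
a = np.diag(np.sqrt(np.arange(1, N)), 1)
ad = a.conj().T
x = (a + ad) / np.sqrt(2)
p = 1j * (ad - a) / np.sqrt(2)
H = ad @ a + 0.5 * np.eye(N)

t = 0.7
U = expm(-1j * H * t)
x_t = U.conj().T @ x @ U                                # A(t) = U† A U
print(np.allclose(x_t, x * np.cos(t) + p * np.sin(t)))  # True
```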

Interaction Picture

Often in problems, it is convenient to change frames, in classical terms.  In QM, there is a similar technique: split off an “extra” Hamiltonian that is well known, let it evolve the operators Heisenberg style, and then use the Schrödinger picture in this new frame, where only the residual, not-as-cute Hamiltonian evolves the states.

Posted in Physics, Quantum Physics | 1 Comment

Maps between Manifolds, Lie Derivative and Killing Fields II

So far we have seen the following:

1.  An arbitrary map \phi between 2 manifolds also uniquely determines how to map vectors between the tangent spaces at corresponding points on the 2 manifolds.

2.  When the map is a diffeomorphism, we can identify the map with a coordinate transformation (active vs. passive view).  On one hand, we can consider the map to induce a mapping between tangent spaces (active view). On the other hand, we can consider it to merely induce a change of components of the unchanging geometrical objects (passive view).

3. A vector field generates a 1 parameter family of maps.  Those maps allow us to compare tangent spaces of the manifold at different points; hence, they allow us to define the Lie derivative.

4.  By choosing the coordinates carefully, the Lie derivative can be expressed as:

L_V w^a = [v, w]^a = v^b \nabla_b w^a-w^b \nabla_b v^a

for ANY derivative operator \nabla
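Here is a symbolic sketch of that independence in polar coordinates on the flat plane (the test fields are my own choices); the Christoffel terms cancel pairwise because \Gamma^a_{bc} is symmetric in its lower indices:

```python
import sympy as sp

# Sketch: v^b nabla_b w^a - w^b nabla_b v^a equals the plain partial-derivative
# commutator [v, w]^a, for the flat metric diag(1, r^2) in polar coordinates.
r, th = sp.symbols('r theta', positive=True)
X = [r, th]
Gamma = {(0, 1, 1): -r, (1, 0, 1): 1 / r, (1, 1, 0): 1 / r}   # Gamma^a_{bc}

v = [r * sp.sin(th), sp.cos(th)]       # arbitrary test fields
w = [r**2, th * r]

def nab(b, W, a):   # nabla_b W^a = partial_b W^a + Gamma^a_{bc} W^c
    return sp.diff(W[a], X[b]) + sum(Gamma.get((a, b, c), 0) * W[c] for c in range(2))

for a in range(2):
    with_nabla = sum(v[b] * nab(b, w, a) - w[b] * nab(b, v, a) for b in range(2))
    with_partials = sum(v[b] * sp.diff(w[a], X[b]) - w[b] * sp.diff(v[a], X[b]) for b in range(2))
    print(sp.simplify(with_nabla - with_partials))   # 0, 0
```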

Lie Derivative on Higher rank objects

To obtain a coordinate expression for the Lie derivative on covariant vectors, we use the Leibniz rule and consistency with contraction.

L_v (U_a W^a) = v(U_a W^a) = W^a L_v(U_a) + U_a [v, W]^a

v(U_a W^a) = v^b \nabla_b (U_a W^a) = v^b U_a \nabla_b W^a + v^b W^a \nabla_b U_a

W^a L_v(U_a) + U_a [v, W]^a = W^a L_v(U_a) + U_a v^b \nabla_b W^a - U_a W^b \nabla_b v^a

Setting the two sides equal leads to

L_v(U_a) = v^b \nabla_b U_a + U_b \nabla_a v^b

For higher rank tensors, we can use contraction with covariant and contravariant vectors to get the expansion in terms of derivative operators:

L_v T^{a_1..a_k}_{b_1..b_l} = v^c \nabla_c (T^{a_1..a_k}_{b_1..b_l}) - \sum_{i=1}^k T^{a_1..c..a_k}_{b_1..b_l} \nabla_c v^{a_i} + \sum_{j = 1}^l T^{a_1..a_k}_{b_1..d..b_l} \nabla_{b_j} v^{d}

Killing Vector Fields

We mentioned briefly that isometries are diffeomorphisms that are also symmetries of the metric; hence the metric does not change under the pullback defined by the map.

Our question now is: how are those isometries generated?

It turns out that a necessary and sufficient condition for a family \phi_t to be a family of isometries is that it is generated by a vector field T which satisfies:

L_T g = 0  (this equation is a literal translation of the requirement that T generates a map that is isometric)

or in coordinate form:

\nabla_a T_b + \nabla_b T_a = 0

Such a vector field T is called a Killing field.  The formulation above seems abstract, but there are simple low-dimensional examples.  For example, consider the 2 sphere: the field that points along lines of latitude (generating rotations about the polar axis) is a Killing field, as checked below.
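Here is a symbolic sketch of that example: on the unit 2-sphere with metric ds^2 = d\theta^2 + \sin^2\theta \, d\phi^2, take T = \partial_\phi and verify Killing's equation component by component.

```python
import sympy as sp

# Sketch: check nabla_a T_b + nabla_b T_a = 0 for T = d/dphi on the 2-sphere.
th, ph = sp.symbols('theta phi')
coords = [th, ph]
g = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])
ginv = g.inv()

# Christoffel symbols Gamma^c_{ab} of the metric-compatible connection
Gamma = [[[sum(ginv[c, d] * (sp.diff(g[d, a], coords[b])
                             + sp.diff(g[d, b], coords[a])
                             - sp.diff(g[a, b], coords[d]))
               for d in range(2)) / 2
           for b in range(2)] for a in range(2)] for c in range(2)]

T_up = [0, 1]                                                     # T = partial_phi
T = [sum(g[a, b] * T_up[b] for b in range(2)) for a in range(2)]  # lowered T_a

def nabla(a, b):   # nabla_a T_b
    return sp.diff(T[b], coords[a]) - sum(Gamma[c][a][b] * T[c] for c in range(2))

for a in range(2):
    for b in range(2):
        print(sp.simplify(nabla(a, b) + nabla(b, a)))     # all 0
```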

Posted in General Relativity, Geometry, Math, Physics | Leave a comment

Maps between Manifolds, Lie Derivatives and Killing fields I

The following discussion is an informal synthesis of some useful techniques used in General Relativity involving maps between manifolds.

Setting up the manifolds

Suppose we have a differentiable map \phi : M \rightarrow N, which maps a point p of M to \phi(p) \in N. M and N are 2 different manifolds (the map is in general not invertible).  We put coordinates x^{\alpha} on M, y^{\beta} on N for convenience.
Pullback of a function

If f is a function f: N \rightarrow R, we can also define a function (\phi^* f): M \rightarrow R by (\phi^* f) = f \circ \phi.  \phi^* f is called the pullback of f induced by the map \phi, because it “pulls back” the function f from the domain N to the domain M.  Note however, that if g is a function M \rightarrow R, there is no obvious “pushforward” of g to N, because \phi might not be invertible.

Pushforward of a Contravariant Vector

Suppose we have a tangent space V(p) at p \in M, where v is a vector in that tangent space. Then, the map \phi induces the pushforward map \phi_*: V(p) \rightarrow V(\phi(p)) (which maps v to \phi_* v) by requiring the following

(\phi_* v)f = v(\phi^* f).

Using coordinates, we can resolve the components of the pushforward of v:
(\phi_* v)^{\beta} \frac{\partial f}{\partial y^{\beta}} = v^{\mu} \frac{\partial f}{\partial y^{\beta}} \frac{\partial y^{\beta}}{\partial x^{\mu}}, \quad i.e. \quad (\phi_* v)^{\beta} = \frac{\partial y^{\beta}}{\partial x^{\mu}} v^{\mu}

We note that the pushforward map does nothing more than resolve the vector v in a new coordinate system; the only minor difference now is that the new coordinate system lives on N rather than M. Also, the noninvertibility of \phi implies that you can pushforward vectors, but can’t pull them back.
Pullback of dual vectors (or 1 forms)

We can now easily define the pullback of a 1-form w defined at \phi(p) as follows:

(\phi^* w)_a v^a = w_b (\phi_* v)^b

This definition just means we want the pullback of a 1 form to be consistent with the pushforward definition.

Finally, in the same way, we can pullback type (0,k) tensors and pushforward type (l,0) tensors.

Pullback and Pushforward when the map is invertible

When \phi is a diffeomorphism, we can both pullback AND pushforward vectors (or dual vectors), since we can define the pullback along \phi as the pushforward along \phi^{-1}.

But since M and N are related by a diffeomorphism, they are essentially the same manifold. Adopting a passive view of the transformation, \phi therefore induces a change of coordinates on the same manifold M, causing the components of the tensors to change.

For physicists, the pushforward operation therefore reduces to the tensor transformation law under a change of coordinates:

(\phi_* T)^{a_1...a_k}_{b_1...b_j}= \frac{\partial y^{a_1}}{\partial x^{u_1}}...\frac{\partial x^{v_1}}{\partial y^{b_1}}...T^{u_1..u_k}_{v_1..v_j}

Suppose T is a tensor field on M.  If \phi maps T to itself,

(\phi_* T) = T

then \phi is a symmetry transformation of T. In this context, we can define an isometry as a diffeomorphism that leaves the metric invariant (hence distances are left the same by the map).  Given a diffeomorphism \phi: M \rightarrow M, we can consider the difference between the tensor field T and its pushforward (\phi_* T).

Suppose we have a one parameter group of diffeomorphisms \phi_t generated by a vector field v (we will elaborate on this later); in other words, we have a group of diffeomorphisms continuously parametrized by the parameter t. Suppose we have a tensor field T defined on M, where p is a point on the manifold.  We define

\Delta T = \phi_{-t*}(T(\phi_t (p))) - T(p)

Intuitively, it means we let the tensor field flow for a parameter time t, then compare the flowed value with the initial one by carrying the tensor at \phi_t(p), which lives in a different tangent space, back to p.

Then, the Lie derivative with respect to v is:

L_v T = \lim_{t \rightarrow 0} \frac{\Delta T}{t}

How to generate a 1 parameter family of diffeomorphisms

We know how to obtain a Lie derivative given a parametrized family of diffeomorphisms \phi_t, and we claimed the family is generated by a vector field v.  Here’s how.

Given the family \phi_t, pick a point p at t = 0.  As you let t evolve, \phi_t(p) traces out a curve on the manifold (unless it is a fixed point).  We define the vector field that generates the family \phi_t as the collection of tangent vectors to the integral curves x^{\mu}(t) traced out by each point of the manifold.

V^{\mu} = \frac{d x^{\mu}}{dt}

Remark: for consistency, the integral curves cannot cross each other, which implies there is some restriction on the family \phi_t.

Note that, given a vector field V^{\mu}(x), we can always generate the family \phi_t (at least locally), a well known fact from differential equations.
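A numerical sketch of that fact (the field and starting point are my own choices): integrate dx/dt = V(x) for the rotation field V = (-y, x) on the plane; the flow \phi_t rotates the starting point by angle t.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch: generate the flow phi_t of V = (-y, x) by solving the ODE.
def V(t, xy):
    x, y = xy
    return [-y, x]

t = 0.8
sol = solve_ivp(V, [0, t], [1.0, 0.0], rtol=1e-10, atol=1e-12)
print(sol.y[:, -1])                 # ~ [cos(0.8), sin(0.8)]
print([np.cos(t), np.sin(t)])
```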

The Lie derivative is a more primitive object than the covariant derivative, because it does not require a connection (hence no metric).  It does require a vector field.  On an ordinary scalar field, the Lie derivative reduces to the directional derivative:
L_V f = V^{\mu} \partial_{\mu} f

Coordinate dependent treatment:

Suppose we want to evaluate the Lie derivative in a certain coordinate system x_1, ..., x_n. Let x_1 = t, and the other coordinates be arbitrary (intuitively, this means we let the curves along which only x_1 varies be the integral curves of V). In this coordinate system, the Lie derivative reduces to a partial derivative in x_1:

L_V T= \frac{\partial}{\partial x_{1}} T

For a vector field U,

L_V U = \frac{\partial U}{\partial x_1}

The commutator

[V, U]^u = V^a \partial_a U^u - U^a \partial_a V^u in this coordinate system reduces to :

\frac{\partial U^u}{\partial x_1}

Since both the Lie derivative and the commutator of 2 vector fields are covariant quantities, and they match in this frame, they must be the same in every frame.

Posted in Geometry, Math | Leave a comment

Differential Forms I

The following rant will be a physics student’s attempt to make sense out of differential forms, hence I will make no apology for my unrigorous explanations.  A great source to read about differential forms and their uses in physics is John Baez’s book “Gauge Fields, Knots and Gravity.”  Sean Carroll’s “Spacetime and Geometry” does introduce differential forms in a heuristic way, but the unsystematic development was very confusing for me.  Wald’s “General Relativity” also has some discussion of differential forms, but he is extremely terse, both in mathematical explanation and in notation.  While reading it, I felt more like I was fighting against his notation than being invited into an interesting topic of mathematical physics.

Preliminaries

A p-form is a completely antisymmetric tensor (multilinear map if you will).  This means that switching any 2 indices of a tensor component yields the negative of the original value.  Let us give some definitions first:

The completely antisymmetric part of a tensor T_{\mu_1 \mu_2..\mu_n} is

T_{[\mu_1 \mu_2... \mu_n]}= \frac{1}{n!} \sum_{\sigma \in perms} (-1)^{\sigma} T_{\mu_{\sigma(1)} \mu_{\sigma(2)}...\mu_{\sigma(n)}}

where (-1)^{\sigma} is the sign of the permutation \sigma.

We can similarly define the completely symmetric part, as a sum over all permutations (we simply omit the (-1)^{\sigma} from the previous definition).

Forms

Let’s define the wedge product of 2 forms (say w is a p form and v is a q form) as a mapping from (w,v) to a (p+q) form:

(w \wedge v)_{a_1 a_2..a_{(p+q)}}= \frac{(p+q)!}{p!q!}w_{[a_1...a_p}v_{a_{p+1}...a_{(p+q)}]}

Note that the dimensionality of the space of p forms on M is {n \choose p}, hence the same as the dimensionality of the space of (n-p) forms (this leads directly to Hodge duality).  For example, \alpha \epsilon_{ijk} describes the 3 forms on R^3.  The parameter \alpha is the only free parameter, hence the dimensionality of those forms is 1.

Differential Forms

Suppose we have a form field defined on a manifold M of dimension n. Let’s do some calculus on it!  An obvious derivative operator comes up.  Of course, we have to make sure the derivative operator (denoted d) maps p forms to (p+1) forms.  The obvious choice is to define the map as:
(dw)_{a_1...a_{p+1}}= (p+1)\partial_{[a_1} w_{a_2...a_{p+1}]}

Because of the complete antisymmetry, there is no preferred derivative operator for this operation (all the connection terms cancel out).  Hence, no metric is necessary for forms!  Furthermore, it is easy to see that:

d^2 w = 0

for any form field.  What is less obvious is that if a p form w satisfies dw = 0, we can always LOCALLY find a (p-1) form v such that w = dv.  We say w is closed if dw = 0.  If we can find a v globally such that dv = w, then w is called exact (a generalization of exact differentials).   So a closed form is not necessarily exact (but an exact form is always closed).  A nontrivial fact is that on a manifold M, the dimensionality of the space of closed forms modulo exact forms is a topological invariant (the Betti number).  We call the space of closed p forms modulo exact forms the deRham cohomology of M (denoted H_p(M)):

H_p(M) = \frac{C_p(M)}{E_p(M)}

So 2 closed forms differing by an exact form are the same element of the cohomology class. The fact that we can extract topological information from a manifold that way is remarkable.  For example, R^4 is topologically equivalent to Minkowski space; since all closed forms are exact on R^4, the same is true on Minkowski space.
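As a quick sanity check of d^2 = 0, here is a symbolic sketch on R^3 with an arbitrary made-up 1-form:

```python
import sympy as sp

# Sketch: (dA)_{ab} = partial_a A_b - partial_b A_a; antisymmetrizing the
# derivative once more must give zero by equality of mixed partials.
x, y, z = sp.symbols('x y z')
X = [x, y, z]
A = [x**2 * y, sp.sin(z) + y, x * z**3]    # arbitrary test components

dA = [[sp.diff(A[b], X[a]) - sp.diff(A[a], X[b]) for b in range(3)] for a in range(3)]

perms = [((0, 1, 2), 1), ((1, 2, 0), 1), ((2, 0, 1), 1),
         ((1, 0, 2), -1), ((0, 2, 1), -1), ((2, 1, 0), -1)]
ddA = sum(sign * sp.diff(dA[b][c], X[a]) for (a, b, c), sign in perms)
print(sp.simplify(ddA))                    # 0
```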

Hodge duality

We have previously remarked that the dimensionalities of the spaces of p form fields and (n-p) form fields are the same, which leads us to the following: create a duality map from p forms to (n-p) forms.  Unlike differentiation, the Hodge duality map needs a metric structure.

For a p form w, its Hodge dual is denoted by *w:

(*w)_{a_1...a_{n-p}}= \frac{1}{p!} \epsilon^{b_1...b_p}{}_{a_1...a_{n-p}} w_{b_1...b_p}

You can convince yourself that the cross product of 2 vectors in R^3 is just the following Hodge dual:

*(A \wedge B) = A \times B
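A numerical sketch of that statement in Euclidean R^3 (the test vectors are my own choices), using the wedge normalization defined above and the 1/p! convention for the dual:

```python
import numpy as np

# Sketch: (A^B)_{ab} = A_a B_b - A_b B_a and (*w)_c = (1/2!) eps_{cab} w_{ab}
# (indices raised and lowered freely since the metric is the identity).
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

A = np.array([1.0, 2.0, 3.0])
B = np.array([-4.0, 0.0, 5.0])
wedge = np.outer(A, B) - np.outer(B, A)
hodge = 0.5 * np.einsum('cab,ab->c', eps, wedge)
print(hodge, np.cross(A, B))   # equal
```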

Electrodynamics in Differential Form notation

One area of application of forms is electrodynamics, where we can define the electromagnetic field strength tensor F as F = dA.  In that notation, Maxwell’s equations are:
dF = 0

d (*F)= *J

where J is the current.  In vacuum, Maxwell’s equations are invariant under the duality transformation: replace F by *F and *F by F and you have the same set of equations.  Furthermore, gauge invariance is simply expressed as invariance of Maxwell’s equations under the following transformation:
A' = A + df where f is a scalar

In a world where magnetic monopoles exist, we cannot use a vector potential anymore, and we have the following Maxwell’s equations instead:

d(F) = *J_M

d(*F) = *J_E

where J_E and J_M are the electric and magnetic currents.

The symmetry is then complete.

Quick aside: Dirac calculated that the elementary charge of a magnetic monopole is inversely proportional to the elementary charge of an electric monopole. Electrodynamics is a weakly coupled theory.  But by taking the Hodge dual of the electrodynamic equations we can get the magnetic dual, a strongly coupled theory. It is much harder to solve because perturbation theory is not applicable anymore.  Hence, it is sometimes convenient to consider weakly coupled theories dual to strongly coupled ones…
Technical detail: while a form is a completely antisymmetric multilinear map on R^n, a differential form is a form field defined on the tangent spaces of a manifold.  It can be shown that, given coordinates, a differential n-form w (a top form) can be written as:

w = g(x_1...x_n) dx_1 \wedge dx_2 \wedge...\wedge dx_n

Notational caveat:

Being lazy, we will often abbreviate w by omitting the wedge products:

w = g(x_1, x_2...x_n) dx_1 dx_2 dx_3....dx_n

Posted in Geometry, Math | Leave a comment

Maxwell Relations.

At first sight, Maxwell relations can seem a little bit intimidating (they were for me), but the concept behind them should be simple from the following point of view.

We know there are several state variables we can play with in thermo: among them E, S …  Let J denote intensive variables, and x denote extensive variables.  Then {\bf J} \cdot d{\bf x} is a work term.

If we calculate the differential of a state variable, it’s an exact differential, which is equivalent to saying it defines a curl free vector field, which is equivalent to saying its mixed derivatives are equal.  Those mixed derivative equalities ARE the Maxwell relations.
Let’s pick E as our state variable.  Then:

dE = TdS + \sum_i J_i dx_i

\rightarrow \frac{\partial J_i}{\partial S}|_{x_i} = \frac{\partial T}{\partial x_i}

Now consider the state variable F = E - TS (it is a state variable because it is a function of state variables only).

dF = -SdT + \sum_i J_i dx_i

\rightarrow \frac{\partial S}{\partial x_i}|_{T}= -\frac{\partial J_i}{ \partial T}
Why don’t we pick another state function G = E - TS - {\bf J} \cdot {\bf x} for the hell of it (actually, we carefully pick our state function to exactly produce the intensive conjugate variable, since those are less obvious to extract).

Then,

dG = -SdT - \sum_i x_i dJ_i

\rightarrow \frac{\partial S}{\partial J_i} = \frac{\partial x_i}{\partial T}
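As a concrete sketch, here is a symbolic check of the relation from F above for an ideal gas, where J = -P and x = V, so it reads (\partial S/\partial V)|_T = (\partial P/\partial T)|_V.  The free energy below is the monatomic ideal gas one, up to an arbitrary constant c:

```python
import sympy as sp

# Sketch: S = -(dF/dT)|_V and P = -(dF/dV)|_T; check dS/dV = dP/dT.
T, V, N, R, c = sp.symbols('T V N R c', positive=True)
F = -N * R * T * (sp.log(V) + sp.Rational(3, 2) * sp.log(T) + c)

S = -sp.diff(F, T)
P = -sp.diff(F, V)                                  # = N R T / V
print(sp.simplify(sp.diff(S, V) - sp.diff(P, T)))   # 0
```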

Posted in Math, Physics | 1 Comment

Tropical Mathematics and the Classical Limit to Quantum Mechanics

We often say that classical mechanics is an approximate theory for quantum mechanics, in the limit of large quantum numbers.  Such statements can lead to some insight, but can also be confusing.  Where is the approximation done?  What terms should we ignore?  The fundamental problem is that quantum mechanics, as formulated via the propagator, seems to be a linear theory.  The amplitude that a state propagates into another one is calculated via a linear operator, the propagator e^{-iHt/\hbar}.  But that amplitude does not give you the probability, its square does.  And the Hamiltonian, the classical analog of energy, is exponentiated, leaving us somewhat confused about the connection of that exponentiated operator to the energy scalar in classical mechanics.

A definitely nicer way to understand the connection to classical mechanics is via the path integral formulation.  In quantum mechanics, we say that the amplitude of propagation from one state to the next is proportional to a path integral over all possible paths.  Each path contributes with equal magnitude in the integral, differing only in phase!

In other words, the quantum amplitude is written as:

\int D(X) e^{\frac{i}{\hbar}S(X)}

Where D(X) denotes the measure on the paths.  The connection to classical mechanics is then manifest. Suppose \hbar \rightarrow 0.  In that limit, if the classical action S(X) is not stationary along the given path, varying the path a little bit will cause the phase to oscillate and exactly cancel our initial path contribution. Therefore, the paths that contribute the most are the ones with stationary action.  This approximation (known as the saddle point method or method of steepest descent) is a common trick used to evaluate integrals of the form:

\int e^{-f(x)}dx = \int e^{-f(x_0) - \frac{1}{2} f''(x_0)(x - x_0)^2 + ...} dx

This turns into a Gaussian integral if we keep only the second order term.
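A numerical sketch with a made-up f(x): after factoring out e^{-f(x_0)/h}, the integral of e^{-(f(x)-f(x_0))/h} should approach the Gaussian result \sqrt{2\pi h / f''(x_0)} as h \rightarrow 0.

```python
import numpy as np
from scipy.integrate import quad

# Sketch: Laplace / saddle-point approximation for a double-well test f.
f = lambda x: (x**2 - 1)**2 + x
h = 0.01

xs = np.linspace(-2, 2, 200001)
x0 = xs[np.argmin(f(xs))]                  # crude location of the minimum
d = 1e-4
fpp = (f(x0 + d) - 2 * f(x0) + f(x0 - d)) / d**2   # numerical f''(x0)

exact, _ = quad(lambda x: np.exp(-(f(x) - f(x0)) / h), -3, 3, points=[x0])
print(exact, np.sqrt(2 * np.pi * h / fpp))  # agree at the percent level
```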
In the same spirit, we evaluate the path integral by finding the stationary path (instead of just finding x_0). Requiring \delta S = 0 reproduces the well known Euler-Lagrange equation of classical mechanics:
\frac{d}{dt} \frac{\partial L}{\partial \dot{q}} = \frac{\partial L}{\partial q}

In the spirit of physics though, we will try to understand this simple concept in a different manner, using idempotent analysis and tropical mathematics.
First some terminology:

A ring is an abelian group under an operation + (a set with a law of composition, blah blah blah…), together with a second associative operation *.  A semiring is a ring which does not need to have additive inverses.  For example, the positive reals form a semiring R^+.
Now consider the semiring characterized by the operations x + y = \max(x,y) and x \cdot y = x + y.  We can create such a semiring as a limit of R^+, which is endowed with the “usual” arithmetic rules of + and *:

Let u,v \in R.

Then map u, v to x, y as follows:
u = h \log (x)

v = h \log(y)

Then, the analog of multiplication in the new semiring (the max-plus algebra, as mathematicians call it) is addition in the original semiring:

u+ v = h \log (xy)

What is the analog of addition of x, y? Consider:

h \log(exp(u/h)+ exp(v/h))= h\log(x +y)

In the limit where h goes to 0, this new operation is equivalent to taking \max(u,v). This operation, which yields a + a = \max(a, a) = a, is what characterizes idempotent semirings.
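A quick numerical sketch of this limit (the values of u, v and h are my own choices):

```python
import numpy as np

# Sketch: h log(e^(u/h) + e^(v/h)) -> max(u, v) as h -> 0.
# np.logaddexp keeps the evaluation stable for small h.
u, v = 1.3, 2.0
for h in [1.0, 0.1, 0.01, 0.001]:
    print(h, h * np.logaddexp(u / h, v / h))
# the printed values approach max(u, v) = 2.0
```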

But we are not limited to transferring operations like adding.  In general, the operation of integrating
\int \phi(X) D(X)

over a certain space of functions X becomes

\sup_X (\phi(X))

by analogy with the addition rule.  We see that the inherently linear operation of integration becomes nonlinear in the new semiring (hint: Hamilton’s equations are coupled and in general nonlinear).
What does the Laplace-Fourier transform then look like in the new max-plus algebra?

Suppose we have the space R^n, where we define the Fourier transform as
\psi'(k) = \int_{R^n} e^{-ik\cdot x} \psi (x) d^n x
We note e^{-ik \cdot x} is characterized by the following functional equation,

f(x)f(y)= f(x+y)

By analogy, we turn this equation into its idempotent analog:
f(x)+f(y)= f(x+y)

In other words, exponentiation becomes addition.

Then the Fourier transform’s integral becomes a supremum, and its multiplication becomes addition.  In the new semiring, the Fourier transform becomes a Legendre transform:

\psi'(k) = \sup_x (-k\cdot x + \psi(x))
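A numerical sketch with a concave test function of my own choosing, \psi(x) = -x^2/2, for which the supremum is exactly \psi'(k) = k^2/2:

```python
import numpy as np

# Sketch: tropical "Fourier transform" psi'(k) = sup_x(-k x + psi(x)).
x = np.linspace(-50, 50, 200001)
psi = -x**2 / 2
for k in [-2.0, 0.5, 3.0]:
    print(k, np.max(-k * x + psi), k**2 / 2)   # grid sup vs exact k^2/2
```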

Surprisingly, to derive the path integral formulation, we make a Fourier transform over momentum space (in fact, we make infinitely many Fourier transforms).  In the limit as h goes to 0, the analog in the classical mechanics semiring is to make a Legendre transform, which reformulates the Hamiltonian into the Lagrangian.

The analogies to physics go far beyond quantum mechanics.  Idempotent mathematics also arises in statistical mechanics, in the formulation of the partition function.

We define

\omega(E)= \int \delta (E- H(r_i,p_i)) \prod dr_i \prod dp_i

In statistical mechanics, the partition function is defined as:

Z(\beta) = \int e^{-\beta E} \omega (E)dE

But this integral can be approximated using the method of steepest descent!  It turns out that the situation can once again be analyzed with idempotent methods.

Posted in Math, Physics | 1 Comment

Central Potential

Some notes about central potential

Periodic Motion condition

Posted in Classical Mechanics, Physics | Leave a comment

Hamiltonian Mechanics

Hamiltonian Mechanics

Some notes I wrote about Hamiltonian mechanics a while ago.  I completely forgot to post them.

Posted in Classical Mechanics, Physics | Leave a comment

FM Snooper Circuit

This is such a cool circuit.  The single transistor in there acts as an emitter follower amplifier at audio frequencies, as a voltage controlled capacitance for FM modulation, and as a common base stage.

FM snooper

Posted in Engineering | Leave a comment