Uncertain Input Data Problems and the Worst Scenario Method

Ivan Hlaváček Dr., ... Ivo Babuška Dr., in North-Holland Series in Applied Mathematics and Mechanics, 2004

Starshaped domains

This particular family of domains enables us to calculate explicitly the constants emerging in the estimates we are dealing with. In our setting, domains Ω1, Ω2, and Ω3 have the following properties: Ω1 is a domain starshaped with respect to the origin of the coordinate system,

(23.23) $\Omega_3 = \{\, y \in \mathbb{R}^2 : y/\alpha \in \Omega_1 \,\},$

where α > 1 is a given constant, and $\overline{\Omega}_1 \subset \Omega_2$, $\overline{\Omega}_2 \subset \Omega_3$, $\overline{\Omega}_3 \subset B$. The domain $\Omega_2$ can be N-unstable.

The domains $\Omega_1$ and $\Omega_3$ are related by the mapping $\varkappa(x) = \alpha x$, which maps $\Omega_1$ onto $\Omega_3$. Therefore, $\varkappa$ can be used to transform functions defined on $\Omega_1$ into functions defined on $\Omega_3$; namely $u_{1\alpha}(y) = u_{1\alpha}(\varkappa(x)) = u_1(x)$ and $G_{1\alpha}(y) = G_{1\alpha}(\varkappa(x)) = G_1(x)$, where $G_1 \equiv G|_{\Omega_1}$.

By defining an auxiliary equation for $u_{1\alpha}$ and by comparing $u_3$ and $u_{1\alpha}$, we can, after some labor, infer the following estimates.

Theorem 23.3

Let $G \in [L_\infty(\Omega_3)]^2$, $G \in H^2(\Omega_3)$, and let $\alpha \in (1, \alpha_0]$. Then

$\| u_2 - u_1 \|_{A,\Omega_1} \le 2(\alpha - 1)\,C,$

where C is a bounded positive parameter. Also,

$\| u_3 - u_2 \|_{A,\Omega_2} \le 2(\alpha - 1)\,C,$

with the same parameter C.

Proof. See (Babuška and Chleboun, 2002, Section 4).  

Remark 23.9

The formula for C can be evaluated so that the parameter can be calculated. To form an idea of what C looks like, let us specify that it includes $|G|_{1,\Omega_1}$, $|G|_{1,\Omega_3}$, $|G|_{2,\Omega_3}$, $\|G\|_{\infty,\Omega_{13}}$, $c_A b^{1/2}$, $b$, and $\alpha_0$.

URL: https://www.sciencedirect.com/science/article/pii/S0167593104800147

Ordinary differential equations

Brent J. Lewis , ... Andrew A. Prudil , in Advanced Mathematics for Engineering Students, 2022

2.2.3 Euler–Cauchy equation

The Euler–Cauchy equations of the form

(2.52) $x^2 y'' + a x y' + b y = 0$

can also be solved by an algebraic method by substituting $y = x^m$ into Eq. (2.52), yielding

$x^2\, m(m-1)x^{m-2} + a x\, m x^{m-1} + b x^m = 0.$

Dividing by $x^m$ (where $x \neq 0$) yields the auxiliary equation

(2.53) $m^2 + (a - 1)m + b = 0.$

Similarly, three cases arise for the general solution of Eq. (2.52), as shown in Table 2.3.

Table 2.3. General solution of the Euler–Cauchy equation.

Case I (distinct real roots $m_1, m_2$ of Eq. (2.53)): basis $x^{m_1}$, $x^{m_2}$; general solution $y = c_1 x^{m_1} + c_2 x^{m_2}$.

Case II (real double root $m = \tfrac{1}{2}(1-a)$): basis $x^{(1-a)/2}$, $x^{(1-a)/2}\ln x$; general solution $y = (c_1 + c_2 \ln x)\,x^{(1-a)/2}$.

Case III (complex conjugate roots $m_1 = \mu + i\nu$, $m_2 = \mu - i\nu$): basis $x^{\mu}\cos(\nu\ln x)$, $x^{\mu}\sin(\nu\ln x)$; general solution $y = x^{\mu}\left[A\cos(\nu\ln x) + B\sin(\nu\ln x)\right]$.

Example 2.2.6

(Case I) Solve $x^2 y'' - 3xy' + 3y = 0$.

Solution. From Eq. (2.53), the auxiliary equation is $m^2 - 4m + 3 = 0$. Hence, $m_1 = 3$ and $m_2 = 1$. Therefore, the general solution is $y = c_1 x^3 + c_2 x$. [answer]

Example 2.2.7

(Case II) Solve $x^2 y'' - xy' + y = 0$.

Solution. The auxiliary equation $m^2 - 2m + 1 = 0$ has a double root $m = 1$. Therefore, the general solution is $y = (c_1 + c_2 \ln x)\,x$. [answer]

Example 2.2.8

(Case III) Solve $x^2 y'' - xy' + 2y = 0$.

Solution. The auxiliary equation is $m^2 - 2m + 2 = 0$, which implies $m_{1,2} = \frac{2 \pm \sqrt{2^2 - 4\cdot 2}}{2} = 1 \pm i$. The general solution is $y = x\left[A\cos(\ln x) + B\sin(\ln x)\right]$. [answer]
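The three worked cases can be cross-checked symbolically. The short Python/SymPy sketch below (not part of the original text; the variable and label names are arbitrary) solves each equation with dsolve and substitutes the result back into the ODE.

import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')

odes = {
    "Case I (Example 2.2.6)":   sp.Eq(x**2*y(x).diff(x, 2) - 3*x*y(x).diff(x) + 3*y(x), 0),
    "Case II (Example 2.2.7)":  sp.Eq(x**2*y(x).diff(x, 2) - x*y(x).diff(x) + y(x), 0),
    "Case III (Example 2.2.8)": sp.Eq(x**2*y(x).diff(x, 2) - x*y(x).diff(x) + 2*y(x), 0),
}

for label, ode in odes.items():
    sol = sp.dsolve(ode, y(x))               # general solution of the Euler-Cauchy equation
    ok, residual = sp.checkodesol(ode, sol)  # substitute the solution back into the ODE
    print(label, sol.rhs, ok)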

URL: https://www.sciencedirect.com/science/article/pii/B9780128236819000101

Ordinary Differential Equations

Xin-She Yang , in Engineering Mathematics with Examples and Applications, 2017

12.3 Second-Order Equations

For second-order ordinary differential equations (ODEs), it is generally trickier to find their general solutions. However, a special case of significant practical importance and mathematical simplicity is the second-order linear differential equation with constant coefficients in the following form

(12.16) $\frac{d^2y}{dx^2} + b\frac{dy}{dx} + cy(x) = f(x),$

where the coefficients b and c are constants, and $f(x)$ is a known function of x. Obviously, the more general form is

(12.17) $a\frac{d^2y}{dx^2} + b\frac{dy}{dx} + cy(x) = f(x).$

However, if we divide both sides by a, we will reach our standard form. Here we assume $a \neq 0$. In the special case of $a = 0$, it reduces to a first-order linear differential equation, which has been discussed in the previous section. So we will start our discussion from (12.16).

A differential equation is said to be homogeneous if f ( x ) = 0 . For a given generic second-order differential equation (12.16), a function that satisfies the homogeneous equation

(12.18) d 2 y d x 2 + b d y d x + c y ( x ) = 0 ,

is called the complementary function, denoted by $y_c(x)$. Obviously, the complementary function $y_c$ alone cannot satisfy the original equation (12.16) because there is no way to produce the required $f(x)$ on the right-hand side. Therefore, we also have to find a specific function, $y^*(x)$, called the particular integral, which does satisfy the original equation (12.16). The combined general solution

(12.19) $y(x) = y_c(x) + y^*(x),$

will automatically satisfy the original equation (12.16). The general solution of (12.16) thus consists of two parts: the complementary function $y_c(x)$ and the particular integral $y^*(x)$. We can obtain these two parts separately and simply add them together, because the original equation is linear and its solutions can therefore be superposed.

First things first, how to obtain the complementary function? The general technique is to assume that it takes the form

(12.20) $y_c(x) = Ae^{\lambda x},$

where A is a constant, and λ is an exponent to be determined. Substituting this assumed form into the homogeneous equation and using both $y_c' = A\lambda e^{\lambda x}$ and $y_c'' = A\lambda^2 e^{\lambda x}$, we have

(12.21) $A\lambda^2 e^{\lambda x} + bA\lambda e^{\lambda x} + cAe^{\lambda x} = 0.$

Since A e λ x should not be zero (otherwise, we have a trivial solution y c = 0 everywhere), we can divide all the terms by A e λ x , and we have

(12.22) λ 2 + b λ + c = 0 ,

which is the characteristic equation for the homogeneous equation. It is also called the auxiliary equation of the ODE. The solution of λ in this case is simply

(12.23) $\lambda = \frac{-b \pm \sqrt{b^2 - 4c}}{2}.$

For simplicity, we can take A = 1 as it does not affect the results.

From quadratic equations, we know that there are three possibilities for λ. They are: I) two real distinct roots, II) two identical roots, and III) two complex roots.

In the case of two different roots: λ 1 λ 2 . Then, both e λ 1 x and e λ 2 x satisfy the homogeneous equation, so their linear combination forms the complementary function

(12.24) y c ( x ) = A e λ 1 x + B e λ 2 x ,

where A and B are constants.

In the special case of identical roots $\lambda_1 = \lambda_2$, or

(12.25) $c = \lambda_1^2, \qquad b = -2\lambda_1,$

we cannot simply write

(12.26) $y_c(x) = Ae^{\lambda_1 x} + Be^{\lambda_1 x} = (A + B)e^{\lambda_1 x},$

because it is still only one part of the complementary function, $y_1 = Ce^{\lambda_1 x}$, where $C = A + B$ is just another constant. In this case, we should try a different combination, say $y_2 = xe^{\lambda_1 x}$, to see whether it satisfies the homogeneous equation. Since $y_2'(x) = e^{\lambda_1 x} + x\lambda_1 e^{\lambda_1 x}$ and $y_2''(x) = \lambda_1 e^{\lambda_1 x} + \lambda_1 e^{\lambda_1 x} + x\lambda_1^2 e^{\lambda_1 x}$, we have

(12.27) $y_2''(x) + by_2'(x) + cy_2(x) = (2\lambda_1 + x\lambda_1^2)e^{\lambda_1 x} + be^{\lambda_1 x}(1 + x\lambda_1) + cxe^{\lambda_1 x} = e^{\lambda_1 x}(2\lambda_1 + b) + xe^{\lambda_1 x}(\lambda_1^2 + b\lambda_1 + c) = 0,$

where we have used $b + 2\lambda_1 = 0$ (identical roots) and $\lambda_1^2 + b\lambda_1 + c = 0$ (the auxiliary equation). This indeed implies that $xe^{\lambda_1 x}$ also satisfies the homogeneous equation. Therefore, the complementary function for the identical roots is

(12.28) $y_c(x) = Ae^{\lambda_1 x} + Bxe^{\lambda_1 x} = (A + Bx)e^{\lambda_1 x}.$
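As a quick sanity check of Eq. (12.27) (this sketch is not part of the original text), the following SymPy fragment confirms symbolically that $y_2 = xe^{\lambda_1 x}$ satisfies the homogeneous equation once $b = -2\lambda_1$ and $c = \lambda_1^2$:

import sympy as sp

x, lam = sp.symbols('x lambda_1')
b, c = -2*lam, lam**2              # repeated-root condition, Eq. (12.25)
y2 = x*sp.exp(lam*x)

residual = sp.simplify(y2.diff(x, 2) + b*y2.diff(x) + c*y2)
print(residual)                    # prints 0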

Now let us use this technique to solve a second-order homogeneous ODE.

Example 12.4

The second-order homogeneous equation

$\frac{d^2y}{dx^2} + 5\frac{dy}{dx} - 6y = 0,$

has a corresponding auxiliary equation

$\lambda^2 + 5\lambda - 6 = (\lambda - 1)(\lambda + 6) = 0.$

It has two real roots

$\lambda_1 = 1, \qquad \lambda_2 = -6.$

So the complementary function is

$y_c(x) = Ae^{x} + Be^{-6x}.$

But for the differential equation

$\frac{d^2y}{dx^2} + 6\frac{dy}{dx} + 9y(x) = 0,$

its auxiliary equation becomes

$\lambda^2 + 6\lambda + 9 = 0,$

which has two identical roots $\lambda_1 = \lambda_2 = -3$. The complementary function in this case can be written as

$y_c(x) = (A + Bx)e^{-3x}.$
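Both equations in Example 12.4 can be verified with SymPy's dsolve; this snippet is illustrative and not part of the original text.

import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

ode1 = sp.Eq(y(x).diff(x, 2) + 5*y(x).diff(x) - 6*y(x), 0)   # distinct real roots 1 and -6
ode2 = sp.Eq(y(x).diff(x, 2) + 6*y(x).diff(x) + 9*y(x), 0)   # repeated root -3

print(sp.dsolve(ode1, y(x)))   # y(x) = C1*exp(-6*x) + C2*exp(x)
print(sp.dsolve(ode2, y(x)))   # y(x) = (C1 + C2*x)*exp(-3*x)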

As complex roots always come in pairs, the case of complex roots would give

(12.29) λ 1 , 2 = α ± i β ,

where α and β are real numbers. The complementary function becomes

(12.30) $y_c(x) = Ae^{(\alpha + i\beta)x} + Be^{(\alpha - i\beta)x} = Ae^{\alpha x}e^{i\beta x} + Be^{\alpha x}e^{-i\beta x} = e^{\alpha x}\left[Ae^{i\beta x} + Be^{-i\beta x}\right] = e^{\alpha x}\left\{A[\cos(\beta x) + i\sin(\beta x)] + B[\cos(\beta x) - i\sin(\beta x)]\right\} = e^{\alpha x}\left[(A + B)\cos(\beta x) + i(A - B)\sin(\beta x)\right] = e^{\alpha x}\left[C\cos\beta x + D\sin\beta x\right],$

where we have used the Euler formula $e^{i\theta} = \cos\theta + i\sin\theta$ and also absorbed the constants A and B into $C = A + B$ and $D = i(A - B)$.

A special case is when α = 0 , so the roots are purely imaginary. We have b = 0 , and c = β 2 . Equation (12.16) in this case becomes

(12.31) d 2 y d x 2 + β 2 y = 0 ,

which is a differential equation for harmonic motions such as the oscillations of a pendulum or a small-amplitude seismic detector. Here β is the angular frequency of the system.

Example 12.5

For a simple pendulum of mass m shown in Fig. 12.2, we now try to derive its equation of oscillations and its period.

Figure 12.2. A simple pendulum and its harmonic motion.

Since the motion is circular, the tension or the centripetal force T is thus given by

$T = \frac{mv^2}{L} = m\dot\theta^2 L,$

where θ ˙ = d θ / d t is the angular velocity.

Forces must be balanced both vertically and horizontally. The component of T in the vertical direction is T cos θ which must be equivalent to mg, though in the opposite direction. Here g is the acceleration due to gravity. That is

T cos θ = m g .

Since θ is small, or $\theta \ll 1$, we have $\cos\theta \approx 1$. This means that $T \approx mg$.

In the horizontal direction, Newton's second law F = m a implies that the horizontal force T sin θ must be equal to the mass m times the acceleration L d 2 θ d t 2 . Now we have

$m\left(L\frac{d^2\theta}{dt^2}\right) = -T\sin\theta \approx -mg\sin\theta.$

Dividing both sides by mL, we have

d 2 θ d t 2 + g L sin θ = 0 .

Since θ is small, we have $\sin\theta \approx \theta$. Therefore, we finally have

$\frac{d^2\theta}{dt^2} + \frac{g}{L}\theta = 0.$

This is the equation of motion for a simple pendulum. From equation (12.31), we know that the angular frequency satisfies $\omega^2 = g/L$, or $\omega = \sqrt{g/L}$. Thus the period of the pendulum is

(12.32) $T = \frac{2\pi}{\omega} = 2\pi\sqrt{\frac{L}{g}}.$

We can see that the period is independent of the bob mass. For L = 1 m and g = 9.8 m/s², the period is approximately $T = 2\pi\sqrt{1/9.8} \approx 2$ seconds.
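The period in Eq. (12.32) is easy to evaluate numerically; the following lines (not from the original text) reproduce the roughly 2-second estimate for a 1 m pendulum.

import math

L = 1.0     # pendulum length in metres
g = 9.8     # gravitational acceleration in m/s^2

T = 2*math.pi*math.sqrt(L/g)
print(f"T = {T:.3f} s")   # about 2.007 s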

Up to now, we have found only the complementary function. Now we will try to find the particular integral $y^*(x)$ for the original non-homogeneous equation (12.16). For particular integrals, we do not intend to find the most general form; any specific function or integral that satisfies the original equation (12.16) will do. Before we can determine the particular integral, we have to use some trial functions, and such functions bear a strong similarity to the function $f(x)$. For example, if $f(x)$ is a polynomial such as $x^2 + \alpha x + \beta$, we will try a similar form $y^*(x) = ax^2 + bx + c$ and determine the coefficients. Let us demonstrate this by an example.

Example 12.6

In order to solve the differential equation

$\frac{d^2y}{dx^2} + 5\frac{dy}{dx} - 6y = x - 2,$

we first find its complementary function. From the earlier example, we know that the complementary function can be written as

$y_c(x) = Ae^{x} + Be^{-6x}.$

For the particular integral, we know that $f(x) = x - 2$, so we try the form

$y^* = ax + b.$

Thus $y^{*\prime} = a$ and $y^{*\prime\prime} = 0$. Substituting them into the original equation, we have

$0 + 5a - 6(ax + b) = x - 2,$

or

$(-6a)x + (5a - 6b) = x - 2.$

As this equality must be true for any x, the coefficients of the same power of x on both sides of the equation should be equal. That is,

$-6a = 1, \qquad 5a - 6b = -2,$

which gives $a = -\tfrac{1}{6}$ and $b = \tfrac{7}{36}$. So the general solution becomes

$y(x) = Ae^{x} + Be^{-6x} - \frac{x}{6} + \frac{7}{36}.$
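A short SymPy check (not part of the original text) confirms that this general solution satisfies $y'' + 5y' - 6y = x - 2$ for arbitrary constants A and B:

import sympy as sp

x, A, B = sp.symbols('x A B')
y = A*sp.exp(x) + B*sp.exp(-6*x) - x/6 + sp.Rational(7, 36)

residual = sp.simplify(y.diff(x, 2) + 5*y.diff(x) - 6*y - (x - 2))
print(residual)   # prints 0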

Similarly, if $f(x) = e^{\alpha x}$, we will try $y^*(x) = ae^{\alpha x}$ so as to determine a. In addition, for $f(x) = \sin\alpha x$ or $\cos\alpha x$, we will attempt the general form $y^*(x) = a\cos\alpha x + b\sin\alpha x$.

Again let us demonstrate this through an example.

Example 12.7

The motion of a damped pendulum is governed by

d 2 y d t 2 + 4 d y d t + 5 y = 40 cos ( 3 t ) .

Its auxiliary equation becomes

λ 2 + 4 λ + 5 = 0 ,

which has two complex solutions

$\lambda = \frac{-4 \pm \sqrt{4^2 - 4\times 5}}{2} = -2 \pm i.$

Therefore, the complementary function becomes

$y_c(t) = e^{-2t}(A\cos t + B\sin t),$

where A and B are two undetermined constants.

Now we try to find the particular integral. Since $f(t) = 40\cos(3t)$, we try the similar form

$y^* = C\cos(3t) + D\sin(3t).$

Since

$y^{*\prime} = -3C\sin(3t) + 3D\cos(3t), \qquad y^{*\prime\prime} = -9C\cos(3t) - 9D\sin(3t),$

we have

$-9[C\cos(3t) + D\sin(3t)] + 4[-3C\sin(3t) + 3D\cos(3t)] + 5[C\cos(3t) + D\sin(3t)] = 40\cos(3t),$

which leads to

$(-9C + 12D + 5C)\cos(3t) + (-9D - 12C + 5D)\sin(3t) = 40\cos(3t),$

which requires

$-9C + 12D + 5C = 40, \qquad -9D - 12C + 5D = 0,$

or

$12D - 4C = 40, \qquad -12C - 4D = 0.$

This gives

$C = -1, \qquad D = 3.$

The particular solution is

$y^* = 3\sin(3t) - \cos(3t).$

Therefore, the general solution becomes

$y = e^{-2t}(A\cos t + B\sin t) + 3\sin(3t) - \cos(3t).$

As time becomes sufficiently long (i.e., $t \to \infty$), we can see that the first two terms will approach zero (since $e^{-2t} \to 0$). Therefore, the solution will be dominated by the last two terms. That is, the long-term behavior becomes

$y(t) \approx 3\sin(3t) - \cos(3t).$

In an earlier chapter about trigonometry, we have shown that the following identity holds

$a\cos\theta + b\sin\theta = R\cos(\theta - \phi),$

where

$R = \sqrt{a^2 + b^2}, \qquad \tan\phi = \frac{b}{a}.$

Therefore, we have

$y = 3\sin(3t) - \cos(3t) = R\cos(3t - \phi),$

where $R = \sqrt{(-1)^2 + 3^2} = \sqrt{10}$ and $\tan\phi = -3$.
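Both the particular integral and the long-term amplitude can be confirmed with SymPy; the following sketch is illustrative and not part of the original text.

import sympy as sp

t = sp.symbols('t')
y_p = 3*sp.sin(3*t) - sp.cos(3*t)     # particular integral found above

residual = sp.simplify(y_p.diff(t, 2) + 4*y_p.diff(t) + 5*y_p - 40*sp.cos(3*t))
print(residual)                        # prints 0

R = sp.sqrt((-1)**2 + 3**2)            # amplitude of the long-term oscillation
print(R, sp.N(R))                      # sqrt(10), approximately 3.162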

URL: https://www.sciencedirect.com/science/article/pii/B9780128097304000161

Differential equations and difference equations

Mary Attenborough , in Mathematics for Electrical Engineering and Computing, 2003

Example 14.8

Solve the differential equation 5y″ + 6y′ +5y = 6 cos(t), where y (0) = 0 and y′ (0) = 0.

Step 1

To find the complementary function we solve the homogeneous equation 5y″ + 6y′ + 5y = 0. Trying solutions of the form y = A e^{λt} leads to the auxiliary equation 5λ² + 6λ + 5 = 0. Notice that a quick way to get the auxiliary equation is to 'replace' y″ by λ², y′ by λ, and y by 1. The auxiliary equation has solutions

$\lambda = \frac{-6 \pm \sqrt{36 - 100}}{10} = \frac{-6 \pm \sqrt{-64}}{10} = -0.6 \pm \mathrm{j}\,0.8.$

Comparing this with λ = k ± jω₀ gives k = −0.6, ω₀ = 0.8. This means that the complementary function has the form

y = e k t ( A cos ( ω 0 t ) + B sin ( ω 0 t ) )

(given as Case (2) in the general method), with k = −0.6 and ω0 = 0.8. This gives

$y = e^{-0.6t}\left(A\cos(0.8t) + B\sin(0.8t)\right).$

Step 2

As f(t) = 6 cos(t), from Table 14.1 we decide to try a particular solution of the form y = c cos(t) + d sin(t). Then

$y' = -c\sin(t) + d\cos(t)$

and

$y'' = -c\cos(t) - d\sin(t).$

Substituting in

$5y'' + 6y' + 5y = 6\cos(t)$

we find

$5(-c\cos(t) - d\sin(t)) + 6(-c\sin(t) + d\cos(t)) + 5(c\cos(t) + d\sin(t)) = 6\cos(t)$
$(-5c + 6d + 5c)\cos(t) + (-5d - 6c + 5d)\sin(t) = 6\cos(t)$
$6d\cos(t) - 6c\sin(t) = 6\cos(t).$

As we want this to be an identity we equate the coefficients of cos(t) and the coefficients of sin(t) and get the two equations

$6d = 6 \;\Rightarrow\; d = 1, \qquad -6c = 0 \;\Rightarrow\; c = 0.$

Hence, a particular solution is y = sin(t).

Step 3

The general solution is given by the sum of the complementary function and a particular solution, so

$y = e^{-0.6t}\left(A\cos(0.8t) + B\sin(0.8t)\right) + \sin(t).$

Step 4

Use the given initial conditions to find values for the constants A and B. Substituting y = 0 when t = 0 into

$y = e^{-0.6t}\left(A\cos(0.8t) + B\sin(0.8t)\right) + \sin(t)$

then

$0 = e^{0}\left(A\cos(0) + B\sin(0)\right) + \sin(0) \;\Rightarrow\; 0 = A.$

To use the other condition, that y′ = 0 when t = 0, we need to differentiate the general solution to find an expression for y′. Differentiating (using the product rule):

$y = e^{-0.6t}\left(A\cos(0.8t) + B\sin(0.8t)\right) + \sin(t)$
$y' = -0.6e^{-0.6t}\left(A\cos(0.8t) + B\sin(0.8t)\right) + e^{-0.6t}\left(-0.8A\sin(0.8t) + 0.8B\cos(0.8t)\right) + \cos(t)$

and using y′ = 0 when t = 0 gives

$0 = -0.6A + 0.8B + 1.$

We have already found A = 0 from the first condition, so

$B = -1/(0.8) = -1.25.$

Therefore, the solution is

$y = -1.25\,e^{-0.6t}\sin(0.8t) + \sin(t).$

Check

$y = -1.25\,e^{-0.6t}\sin(0.8t) + \sin(t)$
$y' = 0.75\,e^{-0.6t}\sin(0.8t) - e^{-0.6t}\cos(0.8t) + \cos(t)$
$y'' = -0.45\,e^{-0.6t}\sin(0.8t) + 0.6\,e^{-0.6t}\cos(0.8t) + 0.6\,e^{-0.6t}\cos(0.8t) + 0.8\,e^{-0.6t}\sin(0.8t) - \sin(t).$

Substitute into

$5y'' + 6y' + 5y = 6\cos(t)$

giving

$5\left(-0.45\,e^{-0.6t}\sin(0.8t) + 0.6\,e^{-0.6t}\cos(0.8t) + 0.6\,e^{-0.6t}\cos(0.8t) + 0.8\,e^{-0.6t}\sin(0.8t) - \sin(t)\right) + 6\left(0.75\,e^{-0.6t}\sin(0.8t) - e^{-0.6t}\cos(0.8t) + \cos(t)\right) + 5\left(-1.25\,e^{-0.6t}\sin(0.8t) + \sin(t)\right) = 6\cos(t)$
$e^{-0.6t}\sin(0.8t)\,(1.75 + 4.5 - 6.25) + e^{-0.6t}\cos(0.8t)\,(6 - 6) + \sin(t)\,(-5 + 5) + 6\cos(t) = 6\cos(t)$

which is true for all values of t.

We can also check that the solution satisfies the given initial conditions.
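A numerical cross-check is also straightforward. The Python sketch below (not part of the original text) integrates 5y″ + 6y′ + 5y = 6 cos(t) with y(0) = y′(0) = 0 using SciPy and compares the result with the analytic solution found above.

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, state):
    y, yp = state
    return [yp, (6*np.cos(t) - 6*yp - 5*y) / 5.0]

t_eval = np.linspace(0.0, 10.0, 201)
sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0], t_eval=t_eval, rtol=1e-9, atol=1e-12)

y_exact = -1.25*np.exp(-0.6*t_eval)*np.sin(0.8*t_eval) + np.sin(t_eval)
print("max |numeric - analytic| =", np.max(np.abs(sol.y[0] - y_exact)))   # very small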

URL: https://www.sciencedirect.com/science/article/pii/B9780750658553500401

Vibrations and the Harmonic Oscillator

J.E. House , in Fundamentals of Quantum Mechanics (Third Edition), 2018

6.2 Linear Differential Equations with Constant Coefficients

Because differential equations are required when representing motion, some familiarity with that branch of mathematics is beneficial when studying quantum mechanics. This section provides a very brief introduction to such mathematics. However, in this section, the results of several important theorems in differential equations will be presented. Because of the nature of this book, the theorems will be presented in an operational manner and used without proof. The interested reader should consult a text on differential equations for more details.

A linear differential equation with constant coefficients is an equation of the form

(6.7) $a_n(x)\frac{d^ny}{dx^n} + a_{n-1}(x)\frac{d^{n-1}y}{dx^{n-1}} + \cdots + a_1(x)\frac{dy}{dx} + a_0(x)\,y = F(x)$

in which the coefficients $a_0(x)$, $a_1(x)$, …, and the function $F(x)$ have values that change only with changes in x. A particularly important equation of this type is the second-order case:

(6.8) $a_2(x)\frac{d^2y}{dx^2} + a_1(x)\frac{dy}{dx} + a_0(x)\,y = F(x)$

The differential operator D is defined as

(6.9) $D = \frac{d}{dx}, \qquad D^2 = \frac{d^2}{dx^2}, \quad \text{etc.}$

When an operator meets the conditions represented as

(6.10) $D(f + g) = Df + Dg$

and the relationship

(6.11) $D^n(f + g) = D^nf + D^ng$

the operator is called a linear operator.

From Eq. (6.8), it can be seen that a second-order linear differential equation can be written in the operator notation as

(6.12) $a_2D^2y + a_1Dy + a_0y = F(x)$

The solution of an equation of this form is obtained by considering an auxiliary equation, which is obtained by writing the equation in the form

(6.13) $f(D)\,y = 0$

when the general differential equation is written as

(6.14) $f(D)\,y = F(x)$

This auxiliary equation is also called the complementary equation, and its solution is known as the complementary solution. The complete solution of the differential equation is the sum of a particular solution and the general solution of the complementary equation. These principles will be illustrated by working through the following example:

Suppose it is necessary to find the general solution of

(6.15) $\frac{d^2y}{dx^2} - 5\frac{dy}{dx} + 4y = 10x$

In operator form, this equation can be written as

(6.16) $\left(D^2 - 5D + 4\right)y = 10x$

A solution of this type of equation is frequently of the form

(6.17) $y = C_1e^{ax} + C_2e^{bx}$

in which a and b are to be determined by the solutions of the complementary equation,

(6.18) $m^2 - 5m + 4 = 0$

Therefore, factoring the polynomial gives

(6.19) $(m - 4)(m - 1) = 0$

from which it can be seen that m  =   4 and m  =   1. In this case, the general solution of Eq. (6.15) is

(6.20) $y = C_1e^{x} + C_2e^{4x}$

It can easily be verified that this is the solution by using it in the complementary equation. If this is the solution, then

$Dy = \frac{dy}{dx} = C_1e^{x} + 4C_2e^{4x}$

and the second derivative can be written as

$D^2y = \frac{d^2y}{dx^2} = C_1e^{x} + 16C_2e^{4x}$

Therefore, the auxiliary equation becomes

(6.21) $\left(D^2 - 5D + 4\right)y = D^2y - 5Dy + 4y = 0$

By substituting for the first and second derivatives, it is found that

(6.22) $C_1e^{x} + 16C_2e^{4x} - 5\left(C_1e^{x} + 4C_2e^{4x}\right) + 4\left(C_1e^{x} + C_2e^{4x}\right) = 0$

By simplification, it can be seen that this equation reduces to 0   =   0. However, it can also be shown that a particular solution is

(6.23) $y = \frac{5}{2}x + \frac{25}{8}$

It will now be shown that this particular solution also satisfies the general equation. From Eq. (6.23) it is found that

(6.24) $Dy = \frac{dy}{dx} = \frac{5}{2} \qquad\text{and}\qquad D^2y = \frac{d^2y}{dx^2} = 0$

When these values are substituted in Eq. (6.15) the result is

$-5\left(\frac{5}{2}\right) + 4\left(\frac{5x}{2} + \frac{25}{8}\right) = 10x$

$10x = 10x$

Therefore, the complete solution of Eq. (6.15) is the sum of the two expressions,

(6.25) $y = C_1e^{x} + C_2e^{4x} + \frac{5}{2}x + \frac{25}{8}$

In most problems, it is sufficient to obtain a general solution, and "singular" solutions that do not describe the physical behavior of the system are ignored.

It must be mentioned that there are two arbitrary constants that characterize the solution that was obtained. Of course, the solution of an nth order equation would result in n constants. In quantum mechanics, these constants are determined by the physical constraints of the system (known as boundary conditions), as was shown in Chapter 3.
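As a quick check (this fragment is not part of the original text), SymPy confirms that Eq. (6.25) satisfies Eq. (6.15) for arbitrary constants C1 and C2:

import sympy as sp

x, C1, C2 = sp.symbols('x C1 C2')
y = C1*sp.exp(x) + C2*sp.exp(4*x) + sp.Rational(5, 2)*x + sp.Rational(25, 8)

residual = sp.simplify(y.diff(x, 2) - 5*y.diff(x) + 4*y - 10*x)
print(residual)   # prints 0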

The equation

(6.26) D 2 y + y = 0

has the auxiliary equation that can be written as

(6.27) m 2 + 1 = 0

From this equation, it can be seen that $m^2 = -1$ and $m = \pm i$. Therefore, the general solution is

(6.28) $y = C_1e^{ix} + C_2e^{-ix}$

At this point, it is useful to remember that

(6.29) d d x sin x = cos x

and

(6.30) $\frac{d}{dx}\cos x = -\sin x = \frac{d^2}{dx^2}\sin x$

From these relationships, it is evident that

(6.31) $\frac{d^2}{dx^2}\sin x + \sin x = 0$

and the solution y  =   sin x satisfies the equation. In fact, if it is assumed that a solution to Eq. (6.26) is of the form

(6.32) y = A sin x + B cos x

then

(6.33) $Dy = A\cos x - B\sin x$

and

(6.34) $D^2y = -A\sin x - B\cos x$

Therefore,

(6.35) $D^2y + y = -A\sin x - B\cos x + A\sin x + B\cos x = 0$

Therefore, Eq. (6.35) shows that the solution represented by Eq. (6.32) satisfies Eq. (6.26). A differential equation can have only one general solution, so the solutions shown in Eqs. (6.28) and (6.32) must be equal:

(6.36) $y = C_1e^{ix} + C_2e^{-ix} = A\sin x + B\cos x$

It can be seen that when x  =   0, C 1  + C 2  = B. Differentiating Eq. (6.36) gives

(6.37) $\frac{dy}{dx} = C_1ie^{ix} - C_2ie^{-ix} = A\cos x - B\sin x$

However, when x  =   0, sin x  =   0. It is apparent, then, that

(6.38) $i\left(C_1 - C_2\right) = A$

By substituting for A and B and simplifying, the result can be shown as

(6.39) $C_1e^{ix} + C_2e^{-ix} = C_1\left(\cos x + i\sin x\right) + C_2\left(\cos x - i\sin x\right)$

If C 2  =   0 and C 1  =   1, then

(6.40) $e^{ix} = \cos x + i\sin x$

Therefore, if C 2  =   1 and C 1  =   0, the result can be shown as

(6.41) $e^{-ix} = \cos x - i\sin x$

The relationships shown in Eqs. (6.40) and (6.41) are known as Euler's formulas.

Suppose it is necessary to solve the differential equation

(6.42) $y'' + 2y' + 5y = 0$

When written in operator form the result is

(6.43) $\left(D^2 + 2D + 5\right)y = 0$

Therefore, the auxiliary equation is written as

(6.44) $m^2 + 2m + 5 = 0$

and its roots are found by using the quadratic formula,

(6.45) $m = \frac{-2 \pm \sqrt{4 - 20}}{2} = -1 \pm 2i$

Therefore, the solution of Eq. (6.42) is

(6.46) $y = C_1e^{(-1 + 2i)x} + C_2e^{(-1 - 2i)x}$

By expanding the exponentials, this equation can also be written as

(6.47) $y = C_1e^{-x}e^{2ix} + C_2e^{-x}e^{-2ix} = e^{-x}\left(C_1e^{2ix} + C_2e^{-2ix}\right)$

Using Euler's formulas to express the exponential functions in terms of sin and cos, the result is

(6.48) $y = e^{-x}\left(A\sin 2x + B\cos 2x\right)$

In general, if the auxiliary equation has roots a ± bi, the solution of the differential equation has the form

(6.49) $y = e^{ax}\left(A\sin bx + B\cos bx\right)$
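The complex-root case can be checked in the same way. The following SymPy sketch (not part of the original text) verifies that Eq. (6.48), the a = −1, b = 2 instance of Eq. (6.49), satisfies Eq. (6.42):

import sympy as sp

x, A, B = sp.symbols('x A B')
y = sp.exp(-x)*(A*sp.sin(2*x) + B*sp.cos(2*x))

residual = sp.simplify(y.diff(x, 2) + 2*y.diff(x) + 5*y)
print(residual)   # prints 0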

An equation having the form

(6.50) $y'' + a^2y = 0$

was obtained in solving the particle in the one-dimensional box problem in quantum mechanics (see Chapter 3). The auxiliary equation is

(6.51) $m^2 + a^2 = 0$

and it has the solutions $m = \pm ia$. Therefore, the solution of Eq. (6.50) can be written as

(6.52) $y = C_1e^{iax} + C_2e^{-iax} = A\cos ax + B\sin ax$

This is exactly the form of the solution found when solving the particle in a one-dimensional box. The boundary conditions of a particular system make it possible to evaluate the constants in the solution, as was shown in Chapter 3.

URL: https://www.sciencedirect.com/science/article/pii/B9780128092422000061

Differential equations

Huw Fox , Bill Bolton , in Mathematics for Engineers and Technologists, 2002

Exceptional cases of particular integrals

There are situations when the obvious form of function to be tried to obtain the particular integral yields no result, because when it is substituted in the differential equation the left-hand side collapses to zero and no value of the coefficient can match the right-hand side. This occurs when the right-hand side of the non-homogeneous differential equation consists of a function that is also a term in the complementary function. To illustrate this, consider the differential equation:

$\frac{d^2y}{dx^2} + \frac{dy}{dx} - 2y = e^{-2x}$

The complementary function is y = A e^{−2x} + B e^{x}. For the particular integral, if we try the solution y = A e^{kx} (with k = −2 to match the right-hand side) we obtain:

$4Ae^{kx} - 2Ae^{kx} - 2Ae^{kx} = e^{-2x}$

and so no solution for A. In such cases we have to try something different.

The basic rule is to multiply the trial solution by x.

Thus we try $y = Axe^{kx}$. This gives, for the above differential equation:

$\left(-2Ae^{kx} + 4Axe^{kx} - 2Ae^{kx}\right) + \left(Ae^{kx} - 2Axe^{kx}\right) - 2Axe^{kx} = e^{-2x}$

Thus k = −2 and $-2A + 4Ax - 2A + A - 2Ax - 2Ax = 1$. Equating constants gives $-3A = 1$, equating the x coefficients gives 0 = 0, and so the particular integral is:

$y_p = -\tfrac{1}{3}xe^{-2x}$

Example

Determine the general solution of the differential equation:

$\frac{d^2y}{dx^2} - 3\frac{dy}{dx} - 10y = 4 - e^{-2x}$

The corresponding homogeneous differential equation is:

$\frac{d^2y}{dx^2} - 3\frac{dy}{dx} - 10y = 0$

Trying y = A e^{sx} as a solution gives the auxiliary equation:

$s^2 - 3s - 10 = 0$

This can be factored as (s − 5)(s + 2) = 0 and so the complementary function is $y_c = Ae^{5x} + Be^{-2x}$. The right-hand side of the non-homogeneous differential equation is the sum of two terms, for which the trial functions would be $C$ and $Dxe^{kx}$ (the factor x is needed because $e^{-2x}$ already occurs in the complementary function). We thus try the sum of these. Thus:

$Dk^2xe^{kx} + Dke^{kx} + Dke^{kx} - 3Dkxe^{kx} - 3De^{kx} - 10\left(C + Dxe^{kx}\right) = 4 - e^{-2x}$

Equating exponential terms gives k = −2 and $4Dx - 2D - 2D + 6Dx - 3D - 10Dx = -1$, and so D = 1/7. Equating constants gives −10C = 4 and so C = −4/10. Thus the particular integral is $-\tfrac{4}{10} + \tfrac{1}{7}xe^{-2x}$. The general solution is therefore:

$y = Ae^{5x} + Be^{-2x} - \frac{4}{10} + \frac{1}{7}xe^{-2x}$
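The general solution can be confirmed symbolically; the SymPy fragment below is illustrative and not part of the original text.

import sympy as sp

x, A, B = sp.symbols('x A B')
y = A*sp.exp(5*x) + B*sp.exp(-2*x) - sp.Rational(4, 10) + sp.Rational(1, 7)*x*sp.exp(-2*x)

residual = sp.simplify(y.diff(x, 2) - 3*y.diff(x) - 10*y - (4 - sp.exp(-2*x)))
print(residual)   # prints 0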

URL: https://www.sciencedirect.com/science/article/pii/B9780750655446500060

Advances in Atomic, Molecular, and Optical Physics

Erik Torrontegui , ... Juan Gonzalo Muga , in Advances In Atomic, Molecular, and Optical Physics, 2013

2.1 Invariant-Based Inverse Engineering

Lewis-Riesenfeld invariants. The Lewis and Riesenfeld (1969) theory is applicable to a quantum system that evolves with a time-dependent Hermitian Hamiltonian $H(t)$, which supports a Hermitian dynamical invariant $I(t)$ satisfying

(1) $i\hbar\,\frac{\partial I(t)}{\partial t} - [H(t), I(t)] = 0.$

Therefore its expectation values for an arbitrary solution of the time-dependent Schrödinger equation $i\hbar\,\partial_t|\Psi(t)\rangle = H(t)|\Psi(t)\rangle$ do not depend on time. $I(t)$ can be used to expand $|\Psi(t)\rangle$ as a superposition of "dynamical modes" $|\psi_n(t)\rangle$,

(2) $|\Psi(t)\rangle = \sum_n c_n|\psi_n(t)\rangle, \qquad |\psi_n(t)\rangle = e^{i\alpha_n(t)}|\phi_n(t)\rangle,$

where $n = 0, 1, \ldots$; the $c_n$ are time-independent amplitudes, and the $|\phi_n(t)\rangle$ are orthonormal eigenvectors of the invariant $I(t)$,

(3) $I(t) = \sum_n |\phi_n(t)\rangle\,\lambda_n\,\langle\phi_n(t)|.$

The $\lambda_n$ are real constants, and the Lewis-Riesenfeld phases are defined as (Lewis and Riesenfeld, 1969)

(4) $\alpha_n(t) = \frac{1}{\hbar}\int_0^t \left\langle \phi_n(t')\left|\, i\hbar\,\frac{\partial}{\partial t'} - H(t') \,\right|\phi_n(t')\right\rangle dt'.$

We use for simplicity a notation for a discrete spectrum of I ( t ) but the generalization to a continuum or mixed spectrum is straightforward. We also assume a non-degenerate spectrum. Non-Hermitian invariants and Hamiltonians have been considered for example in Gao et al. (1991, 1992), Lohe (2009), Ibáñez et al. (2011).

Inverse engineering. Suppose that we want to drive the system from an initial Hamiltonian $H(0)$ to a final one $H(t_f)$, in such a way that the populations in the initial and final instantaneous bases are the same, but admitting transitions at intermediate times. To inverse engineer a time-dependent Hamiltonian $H(t)$ and achieve this goal, we may first define the invariant through its eigenvalues and eigenvectors. The Lewis-Riesenfeld phases $\alpha_n(t)$ may also be chosen as arbitrary functions to write down the time-dependent unitary evolution operator $U$,

(5) $U = \sum_n e^{i\alpha_n(t)}|\phi_n(t)\rangle\langle\phi_n(0)|.$

$U$ obeys $i\hbar\dot U = H(t)U$, where the dot means time derivative. Solving this equation formally for $H(t) = i\hbar\dot U U^{\dagger}$, we get

(6) $H(t) = -\hbar\sum_n |\phi_n(t)\rangle\,\dot\alpha_n\,\langle\phi_n(t)| + i\hbar\sum_n |\partial_t\phi_n(t)\rangle\langle\phi_n(t)|.$

According to Eq. (6), for a given invariant there are many possible Hamiltonians corresponding to different choices of phase functions α n ( t ) . In general I ( 0 ) does not commute with H ( 0 ) , so the eigenstates of I ( 0 ) , ϕ n ( 0 ) , do not coincide with the eigenstates of H ( 0 ) . H ( t f ) does not necessarily commute with I ( t f ) either. If we impose [ I ( 0 ) , H ( 0 ) ] = 0 and [ I ( t f ) , H ( t f ) ] = 0 , the eigenstates will coincide, which guarantees a state transfer without final excitations. In typical applications the Hamiltonians H ( 0 ) and H ( t f ) are given, and set the initial and final configurations of the external parameters. Then we define I ( t ) and its eigenvectors accordingly, so that the commutation relations are obeyed at the boundary times and, finally, H ( t ) is designed via Eq. (6). While the α n ( t ) may be taken as fully free time-dependent phases in principle, they may also be constrained by a pre-imposed or assumed structure of H ( t ) . Sections 3–5 present examples of how this works for expansions, transport, and internal state control.

A generalization of this inverse method for non-Hermitian Hamiltonians was considered in Ibáñez et al. (2011). Inverse engineering was applied to accelerate the slow expansion of a classical particle in a time-dependent harmonic oscillator without final excitation. This system may be treated formally as a quantum two-level system with non-Hermitian Hamiltonian (Gao et al., 1991,1992).

Quadratic in momentum invariants. Lewis and Riesenfeld (1969) paid special attention to the time-dependent harmonic oscillator and its invariants quadratic in position and momentum. Later on Lewis and Leach (1982) found, in the framework of classical mechanics, the general form of the Hamiltonian compatible with quadratic-in-momentum invariants, which includes non-harmonic potentials. This work, and the corresponding quantum results of Dhara and Lawande (1984), constitutes the basis of this section.

A one-dimensional Hamiltonian with a quadratic-in-momentum invariant must have the form $H = p^2/2m + V(q,t)$, with the potential (Lewis and Leach, 1982; Dhara and Lawande, 1984)

(7) $V(q,t) = -F(t)\,q + \frac{m}{2}\omega^2(t)q^2 + \frac{1}{\rho(t)^2}\,U\!\left(\frac{q - q_c(t)}{\rho(t)}\right).$

ρ , q c , ω , and F are arbitrary functions of time that satisfy the auxiliary equations

(8) $\ddot\rho + \omega^2(t)\rho = \frac{\omega_0^2}{\rho^3},$

(9) $\ddot q_c + \omega^2(t)q_c = F(t)/m,$

where $\omega_0$ is a constant. Their physical interpretation will be explained below and depends on the operation. A quadratic-in-$p$ dynamical invariant is given, up to a constant factor, by

(10) $I = \frac{1}{2m}\left[\rho\,(p - m\dot q_c) - m\dot\rho\,(q - q_c)\right]^2 + \frac{1}{2}m\omega_0^2\left(\frac{q - q_c}{\rho}\right)^2 + U\!\left(\frac{q - q_c}{\rho}\right).$

Now α n in Eq. (4) satisfies (Lewis and Riesenfeld, 1969; Dhara and Lawande, 1984)

(11) $\alpha_n(t) = -\frac{1}{\hbar}\int_0^t dt'\left[\frac{\lambda_n}{\rho^2} + \frac{m\left(\dot q_c\rho - q_c\dot\rho\right)^2}{2\rho^2}\right],$

and the function $\phi_n$ can be written as (Dhara and Lawande, 1984)

(12) $\phi_n(q,t) = \exp\!\left\{\frac{im}{\hbar}\left[\frac{\dot\rho\,q^2}{2\rho} + \frac{\left(\dot q_c\rho - q_c\dot\rho\right)q}{\rho}\right]\right\}\frac{1}{\rho^{1/2}}\,\Phi_n(\sigma), \qquad \sigma := \frac{q - q_c}{\rho},$

in terms of the solution $\Phi_n(\sigma)$ (normalized in $\sigma$-space) of the auxiliary Schrödinger equation

(13) $\left[-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial\sigma^2} + \frac{1}{2}m\omega_0^2\sigma^2 + U(\sigma)\right]\Phi_n = \lambda_n\Phi_n.$

The strategy of invariant-based inverse engineering here is to design ρ and q c first so that I and H commute at initial and final times, except for launching or stopping atoms as in Torrontegui et al. (2011). Then H is deduced from Eq. (7). Applications will be discussed in Sections 3 and 4.
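To make the expansion protocol concrete, the following Python sketch (not part of the original text; the fifth-order polynomial ansatz for ρ(t) and the numerical values of ω₀, γ, and t_f are assumptions chosen only for illustration) picks a ρ(t) with ρ(0) = 1, ρ(t_f) = γ and vanishing first and second derivatives at both boundary times, so that I and H commute at t = 0 and t = t_f, and then reads off ω²(t) from the Ermakov equation (8).

import numpy as np
import sympy as sp

t, tf, gam, w0 = sp.symbols('t t_f gamma omega_0', positive=True)
s = t/tf
# Fifth-order interpolation: rho(0) = 1, rho(tf) = gamma, with zero first and
# second derivatives at both ends.
rho = 1 + (gam - 1)*(10*s**3 - 15*s**4 + 6*s**5)

# Ermakov equation (8): rho'' + omega^2(t) rho = omega_0^2 / rho^3
# => omega^2(t) = omega_0^2 / rho^4 - rho'' / rho
omega2 = w0**2/rho**4 - rho.diff(t, 2)/rho

# Numerical example (assumed values): expansion by gamma = 2 in t_f = 1 with omega_0 = 2*pi.
f = sp.lambdify(t, omega2.subs({tf: 1, gam: 2, w0: 2*sp.pi}), 'numpy')
ts = np.linspace(0.0, 1.0, 5)
print(f(ts))   # starts at omega_0^2 and ends at omega_0^2/gamma^4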

URL: https://www.sciencedirect.com/science/article/pii/B9780124080904000025

Higher-Order Differential Equations

Martha L. Abell , James P. Braselton , in Differential Equations with Mathematica (Fourth Edition), 2016

4.6.1 Second-Order Cauchy-Euler Equations

Consider the second-order homogeneous Cauchy-Euler equation

(4.22) $ax^2y'' + bxy' + cy = 0,$

where a≠0. Notice that because the coefficient of y″ is zero if x = 0, we must restrict our domain to either x > 0 or x < 0 in order to ensure that the theory of second-order equations stated in Section 4.1 holds.

Suppose that y = x m , x > 0, for some constant m. Substitution of y = x m with derivatives y′ = mx m−1 and y″ = m(m − 1)x m−2 into equation (4.22) yields

$ax^2y'' + bxy' + cy = ax^2\,m(m-1)x^{m-2} + bx\,mx^{m-1} + cx^m = x^m\left[am(m-1) + bm + c\right] = 0.$

Then, y = x^m is a solution of equation (4.22) if m satisfies

(4.23) $am(m-1) + bm + c = 0,$

which is called the characteristic equation (or auxiliary equation) associated with the Cauchy-Euler equation of order two. The solutions of the characteristic equation completely determine the general solution of the homogeneous Cauchy-Euler equation of order two. Let m 1 and m 2 denote the two solutions of the characteristic (or auxiliary) equation (4.23):

$m_{1,2} = \frac{1}{2a}\left[-(b - a) \pm \sqrt{(b - a)^2 - 4ac}\,\right].$

Hence, we can obtain two real roots, one repeated real root, or a complex conjugate pair depending on the values of a, b, and c. We state a general solution that corresponds to the different types of roots.

Theorem 8

Let m 1 and m 2 be the solutions of equation (4.23).

1.

If $m_1 \neq m_2$ are real and distinct, two linearly independent solutions of equation (4.22) are $y_1 = x^{m_1}$ and $y_2 = x^{m_2}$; a general solution of (4.22) is

y = c 1 x m 1 + c 2 x m 2 , x > 0 .

2.

If m 1 = m 2, two linearly independent solutions of equation (4.22) are y 1 = x m 1 and y 2 = x m 1 ln x ; a general solution of (4.22) is

y = c 1 x m 1 + c 2 x m 1 ln x , x > 0 .

3.

If $m_{1,2} = \alpha \pm \beta i$, β≠0, two linearly independent solutions of equation (4.22) are $y_1 = x^{\alpha}\cos\left(\beta\ln x\right)$ and $y_2 = x^{\alpha}\sin\left(\beta\ln x\right)$; a general solution of (4.22) is

$y = x^{\alpha}\left[c_1\cos\left(\beta\ln x\right) + c_2\sin\left(\beta\ln x\right)\right], \quad x > 0.$

Example 4.6.1

Solve each of the following equations: (a) 3x 2 y″ − 2xy′ + 2y = 0, x > 0; (b) x 2 y″ − xy′ + y = 0, x > 0; (c) x 2 y″ − 5xy′ + 10y = 0, x > 0.

Solution

(a) If y = x m , y′ = mx m−1, and y″ = m(m − 1)x m−2, substitution into the differential equation yields

$3x^2y'' - 2xy' + 2y = 3x^2\,m(m-1)x^{m-2} - 2x\,mx^{m-1} + 2x^m = x^m\left[3m(m-1) - 2m + 2\right] = 0.$

Hence, the auxiliary equation is

$3m(m-1) - 2m + 2 = 3m(m-1) - 2(m-1) = (3m - 2)(m - 1) = 0$

with roots m 1 = 2/3 and m 2 = 1. Therefore, a general solution is y = c 1 x 2/3 + c 2 x. We obtain the same results with DSolve. Entering

Clear[x, y]
gensol = DSolve[3x^2 y''[x] - 2x y'[x] + 2y[x] == 0, y[x], x]

{{y[x] → x^(2/3) C[1] + x C[2]}}

finds a general solution of the equation, naming the result gensol, and then entering

toplot=Table[gensol[[1, 1, 2]]/.{C[1]→i, C[2]→j}, {i, −2, 2, 2}, {j, −2, 2, 2}];

p1=Plot[toplot, {x, 0, 12}, PlotRange→{−6, 6}, AspectRatio→1, AxesLabel→{x, y}, PlotLabel→"(a)"]

defines toplot to be the list of functions obtained by replacing c[1] in gensol[[1,1,2]] by − 2, 0, and 2 and C[2] in gensol[[1,1,2]] by − 2, 0, and 2, and graphs the set of functions toplot on the interval [0, 12]. See Figure 4-26 (a).

Figure 4-26. (a) Various solutions of 3x²y″ − 2xy′ + 2y = 0, x > 0. (b) Various solutions of x²y″ − xy′ + y = 0, x > 0. (c) Various solutions of x²y″ − 5xy′ + 10y = 0, x > 0

(b) In this case, the auxiliary equation is

$m(m-1) - m + 1 = m(m-1) - (m-1) = (m-1)^2 = 0$

with root m = 1 of multiplicity 2. Hence, a general solution is $y = c_1x + c_2x\ln x$. As in the previous example, we see that we obtain the same results with DSolve. See Figure 4-26 (b).

Clear[x, y]
gensol = DSolve[x^2 y''[x] - x y'[x] + y[x] == 0, y[x], x]

{{y[x] → xC[1] + xC[2]Log[x]}}

toplot=Table[gensol[[1, 1, 2]]/.{C[1]→i, C[2]→j}, {i, −2, 2, 2}, {j, −2, 2, 2}];

p2=Plot[toplot, {x, 0, 10}, PlotRange→{−5, 5}, AspectRatio→1, AxesLabel→{x, y}, PlotLabel→"(b)"]

(c) The auxiliary (characteristic) equation is given by

$m(m-1) - 5m + 10 = m^2 - 6m + 10 = 0$

with complex conjugate roots $m_{1,2} = \frac{1}{2}\left(6 \pm \sqrt{36 - 40}\right) = 3 \pm i$. Thus, a general solution is $y = x^3\left[c_1\cos\left(\ln x\right) + c_2\sin\left(\ln x\right)\right]$.

Again, we see that we obtain equivalent results with DSolve. First, we find a general solution of the equation, naming the resulting output gensol.

Clear[x, y]
gensol = DSolve[x^2 y''[x] - 5x y'[x] + 10y[x] == 0, y[x], x]

{{y[x] → x^3 C[2] Cos[Log[x]] + x^3 C[1] Sin[Log[x]]}}

Now, we define y(x) to be the general solution obtained in gensol. (The same result is obtained with Part by entering y[x_]=gensol[[1,1,2]].)

y[x_] = x^3 C[2] Cos[Log[x]] + x^3 C[1] Sin[Log[x]]

x^3 C[2] Cos[Log[x]] + x^3 C[1] Sin[Log[x]]

To find the values of C[1] and C[2] so that the solution satisfies the initial conditions y(1) = a and y′(1) = b, we use Solve and name the resulting list cvals.

cvals=Solve[{y[1]==a, y′[1]==b}, {C[1], C[2]}]

{{C[1] →−3a + b, C[2] → a}}

The solution to the initial-value problem

$x^2y'' - 5xy' + 10y = 0, \qquad y(1) = a, \quad y'(1) = b$

is obtained by replacing C[1] and C[2] in y(x) by the values found in cvals.

y[x_]=y[x]/.cvals[[1]]

a x^3 Cos[Log[x]] + (−3a + b) x^3 Sin[Log[x]]

Note that when you enter the following Plot command, Mathematica may display several error messages because each solution is undefined if x = 0. Nevertheless, the resulting graphs are displayed correctly.

This solution is then graphed for various initial conditions in Figure 4-26 (c).

toplot=Table[y[x], {a, −2, 2, 2}, {b, −2, 2, 2}];

p3=Plot[toplot, {x, 0, 2}, PlotRange→All, AspectRatio→1, AxesLabel→{x, y}, PlotLabel→"(c)"]

Show[GraphicsRow[{p1, p2, p3}]]

URL: https://www.sciencedirect.com/science/article/pii/B9780128047767000048

Comment on: The (G′/G)-expansion method for the nonlinear lattice equations [Commun Nonlinear Sci Numer Simulat 17 (2012) 3490–3498]

İsmail Aslan , in Communications in Nonlinear Science and Numerical Simulation, 2012

Now, we make the following observations:

(i)

Eqs. (1) and (2) are special cases of Eq. (4). Namely, taking a  = α, b  =   α, and c  = β in Eq. (4) leads to Eq. (1); taking a  =   α, b  =   0, and c  =   1 in Eq. (4) leads to Eq. (2). Hence, Eq. (4) includes both Eq. (1) and Eq. (2).

(ii)

The basic (G′/G)-expansion method uses the auxiliary equation $G'' + \lambda G' + \mu G = 0$, where λ and μ are arbitrary constants. Recently, Aslan [5] demonstrated the redundancy of the parameter λ. In other words, one can make the assumption λ = 0 without loss of generality. This approach reduces the number of parameters at the outset without affecting the generality of the results. Also, it should be clear that (5) is a further generalization of (3) in the sense that the sum runs from l = −m to l = m instead of from l = 0 to l = m, where m is a positive integer. This fact indicates that the solutions obtained by (5) include the solutions obtained by (3).

(iii)

It is also worth mentioning here that the so-called two-component Volterra lattice equations

(6) $\frac{du_n}{dt} = u_n\left(v_n - v_{n-1}\right), \qquad \frac{dv_n}{dt} = v_n\left(u_{n+1} - u_n\right),$

URL: https://www.sciencedirect.com/science/article/pii/S1007570412001773

Analytic solutions of a second order iterative functional differential equation

Pingping Zhang , Lufang Mi , in Applied Mathematics and Computation, 2009

In this section, we prove the existence of analytic solution to Eq. (1.1).

Theorem 3.1

Under one of the conditions in Theorems 2.1–2.3, Eq. (1.5) has an analytic solution of the form y ( z ) = g ( θ g - 1 ( z ) ) in a neighborhood of the number γ , where g ( z ) is an analytic solution of (1.7) .

Proof

By Theorems 2.1–2.3, we may find an analytic solution $g(z)$ of the auxiliary equation (1.7) in the form of (2.1) such that $g(0) = \gamma$ and $g'(0) = \eta_0$. Clearly the inverse $g^{-1}(z)$ exists and is analytic in a neighborhood of $g(0) = \gamma$. Define $y(z) = g\left(\theta g^{-1}(z)\right)$. Then

$y'(z) = \theta g'\left(\theta g^{-1}(z)\right)\left(g^{-1}(z)\right)' = \frac{\theta g'\left(\theta g^{-1}(z)\right)}{g'\left(g^{-1}(z)\right)},$

c 1 ( y ( z ) - az ) + c 2 ( y ( z ) - a ) + c 3 y ( z ) = c 1 ( g ( θ z ) - ag ( z ) ) + c 2 θ g ( θ g - 1 ( z ) ) g ( g - 1 ( z ) ) - a + c 3 θ θ g ( θ g - 1 ( z ) ) g ( g - 1 ( z ) ) - g ( θ z ) g ( g - 1 ( z ) ) ( g ( g - 1 ( z ) ) ) 3 = θ g ( θ g - 1 ( z ) ) g ( g - 1 ( z ) ) g θ g - 1 ( g ( θ g - 1 ( z ) ) ) - ag ( θ g - 1 ( z ) ) = ( y ( y ( z ) ) - ay ( z ) ) y ( z )

as required. The proof is completed.  

URL: https://www.sciencedirect.com/science/article/pii/S009630030800920X