
Questions for math experts (linear algebra)

hopbalt

1. Can you have a dot product for two matrices that are NOT vectors and NOT of identical dimensions? Everything I've seen suggests that only vectors (i.e., row or column matrices) with exactly the same dimensions can be used in a dot product.

If that's not true, how would you find the dot product of [1, 2; 3, 4] with [5, 6; 7, 8]? Note that a semicolon denotes a new row and a comma denotes a new column, so both of these are 2 x 2 matrices.



2. Is the trace of a matrix defined ONLY for square matrices? What about for scalars? I'm assuming the trace of 5 is just 5, or is it not defined?
 
hopbalt said:
1. Can you have a dot product for two matrices that are NOT vectors and NOT of identical dimensions? Everything I've seen suggests that only vectors (i.e., row or column matrices) with exactly the same dimensions can be used in a dot product.

If that's not true, how would you find the dot product of [1, 2; 3, 4] with [5, 6; 7, 8]? Note that a semicolon denotes a new row and a comma denotes a new column, so both of these are 2 x 2 matrices.

2. Is the trace of a matrix defined ONLY for square matrices? What about for scalars? I'm assuming the trace of 5 is just 5, or is it not defined?


Hopbalt, great questions. A good resource is Mathworld. You can do a search for "dot product" and for "trace" and see what comes up.

I believe for 1., the answer is that a dot product is only for vectors, and I believe the trace can only be done on square matrices.
 
hopbalt,

1. Can you have a dot product for two matrices that are NOT vectors and NOT of identical dimensions? Everything I've seen suggests that only vectors (i.e., row or column matrices) with exactly the same dimensions can be used in a dot product.

Oddly enough, the answer is both yes and no. The dot product is only defined for vectors, but they do not have to be what we normally think of as vectors.

In fact, any mathematical object with a specific set of properties can be considered a vector in a specific vector space. Those properties are:

1) X + Y must be a vector in the same vector space, for any vectors X and Y.

2) X + Y = Y + X.

3) aX must be a vector in the vector space for all scalars a and all vectors X.

4) A unique null vector 0 must exist such that X + 0 = X for all X.

5) For every vector X there must be a vector X' such that X + X' = 0.

6) a(X + Y) = aX + aY.

7) (a + b)X = aX + bX.

8) (X + Y) + Z = X + (Y + Z).

9) a(bX) = (ab)X for all scalars a and b.

10) 1X = X, where 1 is the unit scalar.

If these criteria are met, then you have a vector space, and you can define an inner product for the space. A dot product is simply what we call an inner product when the vectors are expressed in terms of basis vectors that are orthonormal according to that inner product. In other words, if we have an N-dimensional vector space and an inner product defined for that space, then we can always find a set of N unit vectors such that the inner product of any pair of them is zero. We can then write our vectors using the normal notation [x1, x2, ... xn], where each component refers to the corresponding basis vector. Using that notation, our inner product will look like an ordinary dot product, regardless of what the actual vectors are or how our inner product is defined.
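To spell that last point out (a standard identity, written here in LaTeX for concreteness; it is not in the original post): if $\{e_1, \ldots, e_N\}$ is an orthonormal basis, so that $\langle e_i, e_j \rangle = \delta_{ij}$, and $X = \sum_i x_i e_i$, $Y = \sum_j y_j e_j$, then

$$\langle X, Y \rangle = \sum_{i,j} x_i y_j \langle e_i, e_j \rangle = \sum_i x_i y_i,$$

which is exactly the ordinary dot product of the component lists.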

If that's not true, how would you find the dot product of [1, 2; 3, 4] with [5, 6; 7, 8]? Note that a semicolon denotes a new row and a comma denotes a new column, so both of these are 2 x 2 matrices.

Well, the set of 2x2 matrices qualifies as a 4-dimensional vector space. I can define an inner-product for that space as follows.

X * Y = X11 * Y11 + X12 * Y12 + X21 * Y21 + X22 * Y22

Using that inner product, the following matrices are orthogonal:

[1, 0; 0, 0], [0, 1; 0, 0], [0, 0; 1, 0], [0, 0; 0, 1]

I can then write your matrices in standard notation as [1, 2, 3, 4] and [5, 6, 7, 8]. The dot product is thus 70.
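As a quick sanity check of that arithmetic, here is a minimal NumPy sketch (my own illustration, not part of the original post; the component-wise inner product defined above is commonly called the Frobenius inner product):

Code:
import numpy as np

# The two 2x2 matrices from the question.
X = np.array([[1, 2],
              [3, 4]])
Y = np.array([[5, 6],
              [7, 8]])

# Component-wise inner product defined above:
# X11*Y11 + X12*Y12 + X21*Y21 + X22*Y22.
print(np.sum(X * Y))                  # 70

# Equivalently, flatten each matrix into a 4-vector and take
# the ordinary dot product, as described in the post.
print(np.dot(X.ravel(), Y.ravel()))   # 70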

2. Is the trace of a matrix defined ONLY for square matrices? What about for scalars? I'm assuming the trace of 5 is just 5, or is it not defined?

The trace is only defined for square matrices. A scalar can be considered a 1 x 1 matrix, so I guess that works.
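A correspondingly small NumPy sketch (again my own illustration; note that np.trace needs at least a 2-D array, so the scalar has to be wrapped as a 1 x 1 matrix first):

Code:
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
print(np.trace(A))                 # 1 + 4 = 5

# A scalar treated as a 1 x 1 matrix, as suggested above.
print(np.trace(np.array([[5]])))   # 5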


Dr. Stupid
 
Re: Re: Questions for math experts (linear algebra)

T'ai Chi said:


Hopbalt, great questions. A good resource is Mathworld. You can do a search for "dot product" and for "trace" and see what comes up.

I believe for 1., the answer is that a dot product is only for vectors, and I believe the trace can only be done on square matrices.
Man, that site would have been useful 2 days ago. I just spent two days debugging because my reference book was wrong.

Maybe they have a good reference for rotating a point about an arbitrary axis.
 
OK, I have another question...

Suppose I have an equation that has a gradient in it.

For example, grad^2 x = grad^2 y (grad^2 being the Laplacian).

Now, can I cancel those two operators out and get x = y?

I know it's got to be more complicated than that. Is just setting the boundary conditions equal for both x and y enough to cancel the grads out, or is it not possible to cancel out gradients like that?
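(Restating the question in symbols, with a substitution of my own: let $u = x - y$. Then

$$\nabla^2 x = \nabla^2 y \iff \nabla^2 u = 0,$$

so cancelling is legitimate exactly when the boundary conditions force the harmonic function $u$ to vanish. By the standard uniqueness theorem for Laplace's equation, if $x$ and $y$ satisfy the same Dirichlet boundary conditions, then $u = 0$ on the boundary and hence $u = 0$ everywhere, giving $x = y$ in that case.)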
 
Also, can somebody tell me what an eigenvalue expansion is?

How do you use it to approximate/solve differential equations?
 
Re: Re: Re: Questions for math experts (linear algebra)

ManfredVonRichthoffen said:
Man, that site would have been useful 2 days ago. I just spent two days debugging because my reference book was wrong.

Maybe they have a good reference for rotating a point about an arbitrary axis.

You might try this link.
 
No, I'm not doing homework.

I'm trying to understand a scientific paper. I attached it as a PDF file if you want to take a look.

http://ieeexplore.ieee.org/iel5/10/...swanathan,+R.R.;+Raghavan,+R.;+Gillies,+G.T.;


About half of it is experimental and about half is theoretical/math.

So far, I understand everything up to equation #4.11

Everything after that I'm lost on.

For example, they talk about using an eigenvalue expansion in 2 variables (r and t), yet they only define one boundary condition. For 2 variables, don't you have to have at least 2 boundary conditions?

Also, I have no clue where they got equation 4.12 from. I tried plugging their eigenfunction into equation 4.9 (which is what they claimed to do), but I didn't have any r^2 terms in the solution like they did. Also, if you take their Xk solution of (1/r) cos kr + (1/r) sin kr and differentiate it and plug it back into 4.12, it DOES NOT come out right.

Also, I'm not sure why they really took this approach to begin with. It seems like a standard diff eq; why use an eigenfunction approach? Isn't that harder than the other methods?
 
hopbalt:

I took a brief peek at the paper, and it could be that what they're calling an "eigenvalue" expansion is what is usually [I think] called an "eigenmode" expansion--you construct a solution as an expansion of orthogonal functions (such as sines and cosines). Usually this method is used when you have a differential equation with an infinite number of solutions, all of which can be expanded in terms of those functions.

As far as your boundary condition question is concerned, it looks like the initial condition u(r0,0)=0 suffices for a second boundary condition.

Usually with an eigenmode expansion you extract the coefficients using Fourier's Trick (taking the inner product of both sides of an equation to eliminate the summations over k), but in this case it looks like some of the calculations were suppressed, so if you're not familiar with this technique you might want to consult a math or physics textbook to see how it works.
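For reference, here is the general shape of Fourier's Trick (standard material, written out here rather than quoted from the paper): if $f = \sum_k c_k X_k$ and the $X_k$ are mutually orthogonal under some inner product, take the inner product of both sides with one particular $X_j$:

$$\langle X_j, f \rangle = \sum_k c_k \langle X_j, X_k \rangle = c_j \langle X_j, X_j \rangle \quad\Longrightarrow\quad c_j = \frac{\langle X_j, f \rangle}{\langle X_j, X_j \rangle}.$$

Every term with $k \neq j$ drops out, which is what eliminates the summation.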
 
hopbalt said:

Also, I have no clue where they got equation 4.12 from. I tried plugging their eigenfunction into equation 4.9 (which is what they claimed to do), but I didnt have any r^2 terms in the solution like they did.
Assuming I am looking at the correct r^2's, the r^2 terms come from writing the Laplacian in spherical coordinates and separating variables.

Laplacian = (1/r^2) d/dr ( r^2 d/dr ) + angular differentials
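Carrying that through for the radial functions mentioned below (standard spherical-coordinates material, not quoted from the paper):

$$\frac{1}{r^2}\frac{d}{dr}\!\left(r^2 \frac{dX}{dr}\right) = -k^2 X \qquad\text{for } X(r) = \frac{\cos kr}{r} \text{ or } X(r) = \frac{\sin kr}{r},$$

which is the $r^2$ structure referred to above.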
 
hopbalt said:
Also, if you take their Xk solution of (1/r) cos kr + (1/r) sin kr and differentiate it and plug it back into 4.12, it DOES NOT come out right.
That's right; that's why you have to do the expansion. The true solution of the diffy Q is not just (1/r) cos kr + (1/r) sin kr; it's a summation over all possible k of those functions, with the coefficients to be determined (using the boundary conditions).

For example, the k=0 solution is just some constant times 1/r; this is what the paper calls "the zero eigenvalue solution." There could also be k=1 solutions, k=2 solutions, etc.
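Schematically (my notation, not the paper's), the expansion being described is

$$u(r,t) = \frac{c_0}{r} + \sum_{k \neq 0} c_k\, X_k(r)\, T_k(t),$$

where the first term is the zero-eigenvalue ($k = 0$) piece and the coefficients $c_k$ are fixed by the boundary and initial conditions, e.g. via Fourier's Trick as sketched earlier.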
 
American said:
I take it you have an assignment due, and we're supposed to do it for you.

American, don't you mean "they're" supposed to do it for you? Unless these equations are in "you're" league. (And I'm just kidding, as they are way out of mine.)
 
