Subject: Constraint Optimization-Inequalities
I have a problem consisting of N equations and M unknowns. I also have
inequalities that the solution must obey. What I don't understand is:
1) How can I minimize (root finding) all these equations as one
equation? (I was thinking about the sum of f_i^2 for i=1,...,N.)
2) Solving the problem using Lagrange multipliers and KKT conditions,
after I have set up the conditions, how will I be able to solve the
problem numerically? I will guess a set of values at first, but then
how do I continue updating the solution so that the calculated values
obey the inequalities?
Any help is really appreciated
===
Subject: Re: Constraint Optimization-Inequalities
> I have a problem consisting of N equations and M unknowns. I also have
> inequalitiesthat the solution must obey. What I don't understand is:
> 1) How can I minimize (root finding) of all these equation as one
> equation? (I was thinking about (f^2)i for i=1,N
> 2) Solving the problem using Lagrange Multipliers and KKT conditions,
> after I will have set up the conditions, how I will be able to solve
> the problem numerically? I will guess a set values at first and then
> how I continue updating the solution while the calculated values would
> obey the inequalities?
> Any help is really appreciated
>
you left off an important piece of information: is N>M or N<M?
>I have a problem consisting of N equations and M unknowns. I also have
>inequalitiesthat the solution must obey. What I don't understand is:
>1) How can I minimize (root finding) of all these equation as one
>equation? (I was thinking about (f^2)i for i=1,N
>2) Solving the problem using Lagrange Multipliers and KKT conditions,
>after I will have set up the conditions, how I will be able to solve
>the problem numerically? I will guess a set values at first and then
>how I continue updating the solution while the calculated values would
>obey the inequalities?
>Any help is really appreciated
>
don't reinvent the wheel. what you are imagining is a path-following
technique for solving the necessary optimality conditions while
maintaining feasibility. yes, this works in some cases, but is
cumbersome and expensive. better look in an introductory text (you find
some at the url below), and even better use some software ready for you:
http://plato.la.asu.edu/guide.html
under problems/software you find codes.
under books you find a list of good books showing the theory on which
the software is based. you will also find tutorials, interactive
solvers .. and more
hth
peter
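To make question 1) concrete: one common approach is exactly the sum-of-squares idea, with the inequalities handled by a quadratic penalty. Below is a minimal Python sketch on a hypothetical 2-variable toy system; the equations, the constraint x >= 1.5, the penalty weight, and the crude backtracking descent are all my own illustration choices, not something from the thread.

```python
# Quadratic-penalty sketch (toy problem, for illustration only):
# solve f1 = x + y - 3 = 0, f2 = x*y - 2 = 0 subject to x >= 1.5
# by minimizing F = f1^2 + f2^2 + rho * max(0, 1.5 - x)^2.

def F(v, rho=10.0):
    x, y = v
    f1 = x + y - 3.0
    f2 = x * y - 2.0
    pen = max(0.0, 1.5 - x) ** 2   # penalty is zero when the constraint holds
    return f1 * f1 + f2 * f2 + rho * pen

def grad(F, v, h=1e-6):
    # central-difference gradient, so no analytic derivatives are needed
    g = []
    for i in range(len(v)):
        vp = list(v); vm = list(v)
        vp[i] += h; vm[i] -= h
        g.append((F(vp) - F(vm)) / (2 * h))
    return g

def minimize(F, v, iters=4000):
    for _ in range(iters):
        g = grad(F, v)
        step, fv = 0.5, F(v)
        # backtracking line search: shrink the step until F decreases
        while step > 1e-12:
            w = [vi - step * gi for vi, gi in zip(v, g)]
            if F(w) < fv:
                v = w
                break
            step *= 0.5
    return v

x, y = minimize(F, [2.5, 0.5])   # converges toward the feasible root (2, 1)
```

A real code would of course use a proper constrained solver (SQP, interior point) as suggested above; the sketch only shows why the f_i^2 idea works at all.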
===
Subject: Re: 'inverse iteration' for complex eigenvalues
>>hello,
>>i have been using the 'inverse algorithm' to find eigenvalues of a
>>real
>>matrix, but it seems to only converge satisfactorily for real
>>eigenvalues.
>>for complex eigenvalues it seems to give spurious and widely
>>converging
>>results.
>>the algorithm can be found here
>>http://www.cs.utk.edu/~dongarra/etemplates/node96.html
>>i'm sure i'm using the algorithm correctly for the complex case, and
>>correctly carrying out the complex number arithmetic, but i just
>>cannot
>>get
>>it to converge to the correct complex eigenvalue.
>>is it the case that this is only suited for real eigenvalues?
>> no. it must do also in the complex case. how did you do (5)?
>> hopefully as
>> theta=sum_{i=1 to n} conj(v(i))*y(i)
>> ?
>> hth
>> peter
>hi peter,
>works. why does the complex conjugate for 'v' have to be used in the
>expression for theta?
> you have an approximate eigenvector y which is normalized to euclidean
> length one.
> it is of course a complex vector.
> now, performing one step of inverse iteration, you get
> v = approximately 1/(lambda-mu)*y
> multiplying with the complex conjugate transpose of y you get
> sum_i conj(y(i))*v(i) = 1/(lambda-mu) sum_i conj(y(i))*y(i) =
> 1/(lambda-mu)
> you see: without the conj this would never hold
> hth
> peter
peter,
i've tried to use the complex conjugate of 'v' rather than 'v' itself in
step 5), but it's still not working. any suggestions?
===
Subject: Re: 'inverse iteration' for complex eigenvalues
as usual I made an error (in the explanation below it is explained
correctly):
step (5) is theta=v*y.
you must use
theta=sum conj(y(i))*v(i)
since
v(i) = 1/(lambda-mu)*y(i) (approximately) and norm(y)=1
sorry
peter
it will converge of course only if your shift mu is near the true
complex lambda in the sense
|mu-lambda|< |mu - any other eigenvalue|
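For completeness, here is a small Python sketch of shifted inverse iteration using the corrected update theta = sum conj(y(i))*v(i). The 2x2 test matrix (eigenvalues +i and -i), the dense Gaussian-elimination solver, and the fixed-shift loop are my own illustration choices, not the etemplates code.

```python
def solve(M, b):
    # Gaussian elimination with partial pivoting (works for complex entries)
    n = len(b)
    M = [row[:] for row in M]
    b = b[:]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n):
                M[i][j] -= m * M[k][j]
            b[i] -= m * b[k]
    x = [0j] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / M[i][i]
    return x

def inverse_iteration(A, mu, iters=50):
    n = len(A)
    y = [1.0 + 0j] * n
    nrm = sum(abs(c) ** 2 for c in y) ** 0.5
    y = [c / nrm for c in y]
    lam = mu
    for _ in range(iters):
        B = [[A[i][j] - (mu if i == j else 0) for j in range(n)] for i in range(n)]
        v = solve(B, y)                      # v ~ y / (lambda - mu)
        theta = sum(y[i].conjugate() * v[i] for i in range(n))  # conj(y), NOT conj-free
        lam = mu + 1.0 / theta               # Rayleigh-quotient-style estimate
        nrm = sum(abs(c) ** 2 for c in v) ** 0.5
        y = [c / nrm for c in v]
    return lam

A = [[0.0 + 0j, -1.0 + 0j], [1.0 + 0j, 0.0 + 0j]]   # eigenvalues +i, -i
lam = inverse_iteration(A, 0.9j)                    # shift closest to +i
```

Note the convergence condition from the post: the shift 0.9j is closer to +i than to -i, so the iteration picks out +i.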
===
Subject: Re: 'inverse iteration' for complex eigenvalues
> as usual I made an error: (in the explanation below it is explained
> correctly)
> step (5) is theta=v*y.
> you must use
> theta=sum conj(y(i))*v(i)
> since
> v(i) = 1/(lambda-mu)*y(i) (approximately) and norm(y)=1
> sorry
> peter
> it will converge of course only if your shift mu is near the true
> complex lambda in the sense
> |mu-lambda|< |mu - any other eigenvalue|
peter,
i tried the conjugate of 'y' rather than of 'v' this time, in the
expression for theta, and it's still not converging even after 500
iterations...! i'm pulling my hair out here... any ideas?
jeremy
===
Subject: A=LU factorization
can anyone tell me where i can find the algorithm that implements
this factorization? A=LU
===
Subject: Re: A=LU factorization
> anyone could tell me where i can found the algorithm that implements
> this factorization? A=LU
the crout method and the doolittle method are the main algorithms for LU.
do a web search. if you want to LU-decompose a complex matrix then you are
better off with doolittle.
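A minimal Python sketch of the Doolittle scheme (unit lower-triangular L, upper-triangular U). This version does no pivoting, so it is only safe when all leading principal minors are nonzero; the test matrix is my own example.

```python
def doolittle_lu(A):
    # Doolittle: L unit lower triangular, U upper triangular, A = L U
    # (no pivoting -- a real code permutes rows for stability)
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):       # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):   # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

A = [[4.0, 3.0], [6.0, 3.0]]
L, U = doolittle_lu(A)   # L = [[1, 0], [1.5, 1]], U = [[4, 3], [0, -1.5]]
```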
===
Subject: Re: A=LU factorization
> Can anyone tell me where I can find the algorithm
> that implements an LU factorization? A = LU
Take a look at
The C++ Scalar, Vector, Matrix and Tensor class Library
http://www.netwood.net/~edwin/svmtl/
~/svmtl/examples> expand single.cc
/*
The C++ Scalar, Vector, Matrix and Tensor classes.
Copyright (C) 1998 E. Robert Tisdale
This file is part of The C++ Scalar, Vector, Matrix and Tensor classes.
This library is free software which you can redistribute and/or modify
under the terms of the GNU Library General Public License
as published by the Free Software Foundation;
either version 2, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty
of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU General Public License for more details.
You should have received a copy of
the GNU Library General Public License along with this library.
If not, write to the Free Software Foundation,
Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
Written by E. Robert Tisdale
*/
#include <iostream>
// plus the library's own header(s) declaring doubleMatrix, doubleVector,
// doubleSubArray2, offsetVector and setw (header name not shown in the post)
int
main(int argc, char* argv[]) {
using std::ios; using std::cout;
double a[] = { 1, 0, -1, 0,
1, 5, -3, 3,
2, 4, 1, 7,
3, 2, 0, 5};
doubleMatrix A = doubleSubArray2(a, 0, 4, 4, 4, 1);
cout << "A =\n" << setw(3, 4) << A;
/*
A =
1 0 -1 0
1 5 -3 3
2 4 1 7
3 2 0 5
*/
doubleVector b(4, 0.0);
cout << "b = " << setw(3, 4) << b;
/*
b = 0 0 0 0
*/
offsetVector p = A.lud();
cout << "p = " << setw(3, 4) << p;
/*
p = 3 1 2 0
*/
cout.precision(2);
cout.setf(ios::fixed);
cout << "LU =\n" << setw(6, 4) << A;
/*
LU =
3.00 2.00 0.00 5.00
0.33 4.33 -3.00 1.33
0.67 0.62 2.85 2.85
0.33 -0.15 -0.51 0.00
*/
doubleVector y = b.pl(p, A);
cout << "y = " << setw(6, 4) << y;
/*
y = 0.00 0.00 0.00 0.00
*/
doubleVector x = y.du(A);
cout << "x = " << setw(5, 4) << x;
/*
x = 0.00 0.00 0.00 0.00
*/
return 0;
}
===
Subject: Re: A=LU factorization
>anyone could tell me where i can found the algorithm that implements
>this factorization? A=LU
Just do a web search for LU decomposition.
Dan :-)
===
Subject: Re: Minimization of a function with V-shaped valleys.
> I need to do a non-linear minimization in N = 12 to 400 dimensions (as
> high as the algorithm will allow).
David
===
Subject: Re: Minimization of a function with V-shaped valleys.
> >> I should also have added that I don't know the gradient. I get the
> >> objective function by root-finding in 1-D with Brent's method.
> >> I only just realised that the performance of Brent's method (as seen
> >> in Numerical Recipes) can be very poor. I've always thought of it as
> >> a stabilised quadratic fit, but it's really an accelerated binary
> >> search, and the acceleration factor is only 2.
> >Yes, I can do it. I have a new 1-D rootfinding algorithm and can't see
> >why it shouldn't have an asymptotic convergence rate better than any
> >in Alefeld, Potra & Shi Algorithm 748.
> >The principle is simple. Any method for estimating the root in a
> >bounded interval, whether it be secant, quadratic, inverse quadratic,
> >cubic or other, will tend to approach the root from one side. Call
> >this method A. Develop a method of the same order using controlled
> >under- or overshoots to approach the root from the other side. Call
> >this method B. The algorithm takes one step of method A then B then A
> >then B etc. so that the bracket on the root comes in from a different
> >side on each iteration.
> >Shrink the amount of under- or overshoot of each iteration by the
> >amount that the bracket around the root shrinks; this uses the
> >shrinkage of the bracket to accelerate the shrinkage of the bracket.
> >That's the 1% inspiration.
> >I tried it on a well behaved objective function and it works
> >beautifully, faster than an un-bracketed version. It should be
> >asymptotically faster than Alefeld, Potra & Shi's algorithms (748)
> >because they use a constant-overshoot lower-order approximation (a
> >doubled secant step) which should be a lot poorer than my variable
> >overshoot higher order approximation.
> >The remaining 99% (poorly behaved objective functions) is still to be
> >done.
> >Do you know of anything similar?
> the Illinois modification of the secant rule:
> (t(k),f(k)) the generated sequence and (a(k),b(k)) the bracketing of the
> root. initially a(0)=t(0) b(0)=t(1) and f(a(0))*f(b(0))<0.
> % while some termination criterion is not satisfied
> if f(k)*f(k-1)<0
> t(k+1)=t(k)-f(k)*(t(k)-t(k-1))/(f(k)-f(k-1));
> elseif f(k)*f(k-1)>0 and f(k)*f(k-2)<0
> t(k+1)=t(k)-f(k)*(t(k)-t(k-2))/(f(k)-f(k-2)/2);
> else
> t(k+1)=(a(k)+b(k))/2;
> end
> f(k+1)=f(t(k+1));
> if f(k+1) == 0
> sol=t(k+1);
> return;
> end
>
> if f(a(k))*f(k+1) < 0
> b(k+1)=t(k+1);
> a(k+1)=a(k);
> else
> a(k+1)=t(k+1);
> b(k+1)=b(k);
> end
> % observe that you need not recompute f(a(k)) and f(b(k))
> hth
> peter
> else
> t(k+1)=(a(k)+b(k))/2;
Binary search as the solution is approached? It doesn't look
promising.
> elseif f(k)*f(k-1)>0 and f(k)*f(k-2)<0
> t(k+1)=t(k)-f(k)*(t(k)-t(k-2))/(f(k)-f(k-2)/2);
Half length secant step. Not as good as a variable undershoot secant
step. Better to replace the 1/2 by a weight w and have w tending to 1
as the bracket size tends to zero.
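For reference, the Illinois scheme quoted above can be sketched in Python roughly as follows. This is an illustrative reimplementation of the standard Illinois modification of regula falsi, not Peter's exact pseudocode, and the test function is my own.

```python
def illinois(f, a, b, tol=1e-12, maxit=100):
    # Illinois variant of regula falsi: when the same endpoint survives
    # two iterations in a row, its stored function value is halved,
    # which forces the bracket to shrink from both sides.
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "need a sign change on [a, b]"
    side = 0
    for _ in range(maxit):
        t = b - fb * (b - a) / (fb - fa)   # secant / false-position step
        ft = f(t)
        if abs(ft) < tol:
            return t
        if fa * ft < 0:                    # root now bracketed in [a, t]
            b, fb = t, ft
            if side == -1:
                fa *= 0.5                  # Illinois modification
            side = -1
        else:                              # root now bracketed in [t, b]
            a, fa = t, ft
            if side == +1:
                fb *= 0.5
            side = +1
    return t

root = illinois(lambda x: x * x - 2.0, 1.0, 2.0)
```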
===
Subject: Re: Minimization of a function with V-shaped valleys.
>> else
>> t(k+1)=(a(k)+b(k))/2;
>Binary search as the solution is approached? It doesn't look
>promising.
no! this is a kind of emergency exit, for the case that sign changes do
not appear even after a modified step
>> elseif f(k)*f(k-1)>0 and f(k)*f(k-2)<0
>> t(k+1)=t(k)-f(k)*(t(k)-t(k-2))/(f(k)-f(k-2)/2);
>Half length secant step. Not as good as a variable undershoot secant
>step. Better to replace the 1/2 by a weight w and have w tending to 1
>as the bracket size tends to zero.
one can show that finally every third step is a modified secant step, the
others being normal secant steps, and convergence is fast; but of course
with an adaptive modification it should be even better
hth
peter
===
Subject: romberg integration
I have the integral:
Integral( sqrt( 1 + cos(x)^2 ), x, 0, 48)
I solved it with my TI89 and got an answer, but I'm trying to do it by the
Romberg integration algorithm and I am off by more than a little. Would
this integral cause problems with Romberg integration? Why?
===
Subject: Re: romberg integration
> I have the integral:
> Integral( sqrt( 1 + cos(x)^2 ), x, 0, 48)
> I solved with my TI89 and got an answer, but I'm trying to do it by
Romberg
> integration algorithm and I am off by more than a little. Would this
> integral cause problems with romberg integration? Why?
apart from Dave's answer: Are you sure that you used the proper units
(rad vs. degrees)?
Alois
===
Subject: Re: romberg integration
> I have the integral:
> Integral( sqrt( 1 + cos(x)^2 ), x, 0, 48)
> I solved with my TI89 and got an answer, but I'm trying to do it by
Romberg
> integration algorithm and I am off by more than a little. Would this
> integral cause problems with romberg integration? Why?
> apart to Dave's answer: Are you sure that you used the proper units (rad
> vs. degrees)?
> Alois
Yes, I graphed it and saw how oscillatory it is too. And sure enough, as I
did more subdivisions my answer matched the TI89.
===
Subject: Re: romberg integration
> I have the integral:
> Integral( sqrt( 1 + cos(x)^2 ), x, 0, 48)
> I solved with my TI89 and got an answer, but I'm trying to do it by
Romberg
> integration algorithm and I am off by more than a little. Would this
> integral cause problems with romberg integration? Why?
The function is highly oscillatory. If you use a small number of
intervals,
the estimate is going to be erratic.
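A plain Romberg sketch in Python makes this easy to see: with few levels the oscillations alias badly, while with enough subdivisions the extrapolated value settles down. The helper below is a generic textbook Romberg table (trapezoid rule plus Richardson extrapolation), not the poster's code.

```python
import math

def romberg(f, a, b, levels=12):
    # Romberg table: R[i][0] is the trapezoid rule with 2^i intervals,
    # R[i][j] applies j levels of Richardson extrapolation.
    R = [[0.0] * (levels + 1) for _ in range(levels + 1)]
    h = b - a
    R[0][0] = 0.5 * h * (f(a) + f(b))
    for i in range(1, levels + 1):
        h *= 0.5
        # refine the trapezoid rule by adding only the new midpoints
        s = sum(f(a + (2 * k - 1) * h) for k in range(1, 2 ** (i - 1) + 1))
        R[i][0] = 0.5 * R[i - 1][0] + h * s
        for j in range(1, i + 1):
            R[i][j] = R[i][j - 1] + (R[i][j - 1] - R[i - 1][j - 1]) / (4 ** j - 1)
    return R[levels][levels]

# the poster's integrand: about 15 oscillation periods on [0, 48],
# so coarse levels sample it far too sparsely
val = romberg(lambda x: math.sqrt(1.0 + math.cos(x) ** 2), 0.0, 48.0, levels=14)
```

With 14 levels (16384 subintervals) the result is stable; with only a handful of levels the trapezoid sums barely resolve the oscillations, which is exactly the erratic behaviour described above.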
--
Dave Seaman
Judge Yohn's mistakes revealed in Mumia Abu-Jamal ruling.
===
Subject: How to determine numerically what matrix is indefinite?
I'm now studying the algorithm SYMMLQ. It's said that this algorithm
is very good for solving
Ax=b
when the real symmetric matrix A is large and sparse, even if A is
indefinite. So how do I determine the type of A?
Is it a good way to calculate the maximum and minimum
eigenvalues?
Xiaoqian Wu
Shanghai University, PR. China
===
Subject: Re: How to determine numerically what matrix is indefinite?
>numerically what matrix is indefinite?
> I'm now study the algorithm SYMMLQ. It's said that this algorithm
>is very good for solving
> Ax=b
>when the real symmetric matrix A is lage and sparse, even A is
>indefinite. So how to determine the type of A?
> Is it a good way to calculate the maximum and minimum
>eigenvalues?
no. the term applies to symmetric matrices only. hence use cholesky with
diagonal pivoting. if you cannot find a strictly positive pivot at step k,
while the lower right submatrix of dimension n-k+1 has a zero diagonal but
nonzero offdiagonal elements, or a negative element on the diagonal, then
it is indefinite. or: apply the bunch-parlett decomposition with pivoting.
if then a negative and a positive 1 by 1 pivot, or an indefinite 2 by 2
pivot, occurs, the matrix is indefinite.
eigenvalue computation from both ends of the spectrum is much too expensive.
hth
peter
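A toy Python version of the pivot-sign idea: an unpivoted LDL^T sketch where the signs of the pivots in D classify the matrix. As noted above, real codes need diagonal or Bunch-Parlett pivoting to be reliable; this sketch simply gives up when a pivot is (near) zero.

```python
def classify_symmetric(A, eps=1e-12):
    # LDL^T without pivoting: A = L D L^T with L unit lower triangular.
    # The signs of the pivots d[j] classify a nonsingular symmetric matrix.
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    d = [0.0] * n
    for j in range(n):
        d[j] = A[j][j] - sum(L[j][k] ** 2 * d[k] for k in range(j))
        if abs(d[j]) < eps:
            return "singular or needs pivoting"   # unpivoted sketch breaks down
        L[j][j] = 1.0
        for i in range(j + 1, n):
            L[i][j] = (A[i][j] - sum(L[i][k] * L[j][k] * d[k] for k in range(j))) / d[j]
    if all(p > 0 for p in d):
        return "positive definite"
    if all(p < 0 for p in d):
        return "negative definite"
    return "indefinite"
```

The cost is one factorization, which is exactly why it beats computing extreme eigenvalues.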
===
Subject: Best linear iterative solver.
I am looking for a linear iterative solver on the net, but I am
confused by the number of techniques... I work with a typical finite
element system of linear equations (symmetric, banded, diagonally
dominant), usually positive definite (if this is not true the problem is
invalid, i.e., the structure is not well defined).
What is the best iterative solver for this kind of problem?
===
Subject: Re: Best linear iterative solver.
> I looking for a linear iterative solver over the net but I am
> confusing for the amount of techiques... I work with a tipical Finite
> Element system of linear equations (symmetric, banded, diagonal
> dominant) usualy positive definite (if this is not true the problem is
> invalid, aka, the structure is not well defined).
> What is the best iterative solutor for this kind of problem?
For a symmetric positive definite linear system coming from FE, the
preconditioned conjugate gradient method works fine. The only problem
is that the number of necessary iterations depends highly on the
condition number of the matrix A, which grows when the mesh is refined.
The number of necessary iterations can be bounded asymptotically:
there exists a constant C>0 such that
i_max <= C (cond(A))^{1/2+delta}, where delta > 0 is small.
>I looking for a linear iterative solver over the net but I am
>confusing for the amount of techiques... I work with a tipical Finite
>Element system of linear equations (symmetric, banded, diagonal
>dominant) usualy positive definite (if this is not true the problem is
>invalid, aka, the structure is not well defined).
>What is the best iterative solutor for this kind of problem?
if the grid has no quite regular structure, then conjugate gradients plus
preconditioning, e.g. by doing a domain decomposition, giving the coupling
nodes the highest numbering, and using the block diagonal part of the
stiffness matrix as the preconditioner. otherwise multigrid.
hth
peter
===
Subject: Re: Best linear iterative solver.
X-RFC2646: Format=Flowed; Original
>I looking for a linear iterative solver over the net but I am
> confusing for the amount of techiques... I work with a tipical Finite
> Element system of linear equations (symmetric, banded, diagonal
> dominant) usualy positive definite (if this is not true the problem is
> invalid, aka, the structure is not well defined).
> What is the best iterative solutor for this kind of problem?
iterative methods for systems of linear equations? gauss-seidel and jacobi
are the main ones
===
Subject: Re: Best linear iterative solver.
> gauss-seidel and jacobi
> are the main ones
No they're not. They only work under conditions that are basically
equivalent to linear elements, but even then a preconditioned conjugate
gradient is an order faster. Do multigrid and you might have an optimal
algorithm.
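For reference, the conjugate gradient iteration being recommended is only a few lines. Below is an unpreconditioned Python sketch on a dense toy matrix of my own; a real FE code would use a sparse matrix-vector product and a preconditioner in place of the plain residual.

```python
def cg(A, b, tol=1e-10, maxit=1000):
    # plain (unpreconditioned) conjugate gradients for a symmetric
    # positive definite matrix A, starting from x = 0
    n = len(b)
    x = [0.0] * n
    r = b[:]              # residual b - A x
    p = r[:]              # search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(maxit):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new ** 0.5 < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

x = cg([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])   # exact solution (1/11, 7/11)
```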
V.
--
email: lastname at cs utk edu
homepage: www cs utk edu tilde lastname
===
Subject: Re: collocation method in finite elements PDE
> hi,
> collocation finite element method. Can someone tell me the pros & cons
> of this method vs the Galerkin finite element method?
> TF
Problematic. And they cannot always be distinguished properly, since
there even exist mixtures of the two. The following provides a lucid
introduction to these matters (as well):
Han de Bruijn
===
Subject: Re: collocation method in finite elements PDE
seems that nobody likes this question, and I think it is not easy
to answer.
Both methods can be viewed in a variational context. Then, Galerkin
means that test and ansatz spaces are equal,
whereas collocation means that the test functions are delta
distributions. Thus, one gets a different regularity theory (more
difficult for collocation, I think).
An advantage of collocation, especially the p-version, is that it is
easy to implement, since one does not need to calculate integrals (cf.
Spectral Methods in MATLAB, Trefethen). This
becomes even more important for boundary integral equations (cf. Kress:
Numerical Analysis; Integral Equations).
I would like to know more about comparisons of convergence rates etc.
> hi,
> collocation finite element method. Can someone tell me the pros & cons
> of this method vs the Galerkin finite element method?
> TF
--
!----------------------------------------------------------+
! Andreas Krebs
! Lehrstuhl fuer Numerische und Angewandte Mathematik
! Institut fuer Mathematik / Fakultaet I
! Raum 206 / Lehrgebaeude 10
! email: lastname at org.domain
! org: math.tu-cottbus domain: de
! Tel.: (+49 355) 69-3067
! Fax : (+49 355) 69-2776
! PF 101344, D-03013 COTTBUS, Germany
!
! URL: http://vieta.math.tu-cottbus.de/~krebs/
!----------------------------------------------------------+
===
Subject: A C++ question BLAS
excuse me for this irrelevant question but I do think that someone here,
who works on numerical analysis, has already solved this problem: how can I
use BLAS on MS Visual C++? I can download BLAS source code, I have Intel
there I find no reply to this question ( but for Linux distro Yes!).
TF
===
Subject: any good fortran package to solve large scale nonlinear system of
equations?
I need to solve a nonlinear system of about 5000 equations, which is
ill-conditioned. I tried the solvers at IMSL but always failed.
Especially I can't provide a close enough initial-guess in advance.
Any recommendations in this case? Packages that can be used with F90/95
are preferred.
Yiyu
===
Subject: Re: any good fortran package to solve large scale nonlinear system
of equations?
>I need to solve a nonlinear system of about 5000 equations, which is
>ill-conditioned. I tried the solvers at IMSL but always failed.
>Especially I can't provide a close enough initial-guess in advance.
>Any recommendations in this case? Packages can be used with F90/95 is
>preferred.
The first place I go for Fortran 90/95 code is Alan Miller's site, and at
http://users.bigpond.net.au/amiller/NSWC/hbrd.f90 there is a code for
solving
sets of nonlinear equations. If the problem is ill-conditioned, maybe using
quadruple precision can help, and the hbrd.f90 can be easily modified to
do that.
Can you try using a global optimization algorithm on the sum of
squared deviations of the equations from zero?
Can you solve a subset of the equations to get an initial guess?
Most fundamentally, how do you know there *is* a solution?
I have only worked with small systems of nonlinear equations, so please
discount my advice appropriately.
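One standard way to combine the sum-of-squares idea with Newton's method is a damped (line-searched) Newton iteration that uses ||F||^2 as a merit function. Here is a Python sketch on a tiny 2-equation toy system of my own invention (nothing like 5000 equations, and no claim that it handles ill-conditioning):

```python
# Damped Newton sketch: solve F(x) = 0, using m(x) = ||F(x)||^2
# to decide how far to step along the Newton direction.

def F(v):
    x, y = v
    return [x * x + y * y - 4.0, x * y - 1.0]   # toy system, my own choice

def J(v):
    x, y = v
    return [[2 * x, 2 * y], [y, x]]             # its Jacobian

def merit(v):
    return sum(f * f for f in F(v))

def solve2(M, b):
    # Cramer's rule for a 2x2 system (a real code would factorize J)
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(M[1][1] * b[0] - M[0][1] * b[1]) / det,
            (M[0][0] * b[1] - M[1][0] * b[0]) / det]

def damped_newton(v, iters=50):
    for _ in range(iters):
        f = F(v)
        d = solve2(J(v), [-fi for fi in f])     # Newton direction
        t, m0 = 1.0, merit(v)
        # backtrack until the sum of squares decreases
        while t > 1e-10 and merit([v[0] + t * d[0], v[1] + t * d[1]]) >= m0:
            t *= 0.5
        v = [v[0] + t * d[0], v[1] + t * d[1]]
    return v

x, y = damped_newton([2.0, 0.3])
```

The damping is what lets a poor initial guess survive the early iterations; near the solution the full step t = 1 is accepted and convergence is quadratic.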
===
Subject: Re: any good fortran package to solve large scale nonlinear system
of equations?
>I need to solve a nonlinear system of about 5000 equations, which is
>ill-conditioned. I tried the solvers at IMSL but always failed.
>Especially I can't provide a close enough initial-guess in advance.
>Any recommendations in this case? Packages can be used with F90/95 is
>preferred.
> The first place I go for Fortran 90/95 code is Alan Miller's site, and at
> http://users.bigpond.net.au/amiller/NSWC/hbrd.f90 there is a code for
solving
> sets of nonlinear equations. If the problem is ill-conditioned, maybe
using
> quadruple precision can help, and the hbrd.f90 can be easily modified to
> do that.
> Can you try using a global optimization algorithm algorithm on the sum of
> squared deviations of the equations from zero?
> Can you solve a subset of the equations to get an initial guess?
> Most fundamentally, how do you know there *is* a solution?
> I have only worked with small systems of nonlinear equations, so please
discount
> my advice appropriately.
Yes, I agree I'm not sure if a solution exists, though I suspect there
is one. Is there any way I can find out?
===
Subject: Re: any good fortran package to solve large scale nonlinear system
of equations?
>I need to solve a nonlinear system of about 5000 equations, which is
>ill-conditioned. I tried the solvers at IMSL but always failed.
>Especially I can't provide a close enough initial-guess in advance.
>Any recommendations in this case? Packages can be used with F90/95 is
>preferred.
>Yiyu
http://plato.la.asu.edu/topics/problems/zero.html -> nleq1s or nleq2
hth
peter
===
Subject: Root Finder v.
Due to the deluge of feedback on my mistakes, I have made
corrections to earlier Root Finders, now in its 5th edition. I
would like to thank all of those who have taken the time to
point out my errors, in order to arrive at perhaps not a
perfect, but possibly a more reasonably correct result. Please
keep your comments flowing in, for better future editions of
Root Finder.
Root Finder v.
by Jon Giffen
T*N+a[0]=0 is the nth degree polynomial, since
(t,t^2,t^3,...,t^n)*(a[1],a[2],a[3],...,a[n])+a[0]=0
Following T up from the origin, R is orthogonal to T-R
R is parallel to the normal N.
R*(T-R)=0
R'*(T-R)+R*(T'-R')=0
2R'*R = R'*T + R*T'
((T'*N)/|N|)N/|N|=R'
Q*(T'-(T'*N)N/|N|^2) = 0
Q is the shortest vector from the origin to the plane.
T = (t,t^2,t^3,...,t^n)
T'= (1,2t,3t^2,...,nt^(n-1))
T'*Q = q[1]+2q[2]t+3q[3]t^2+....+nq[n]t^(n-1)
     = (1/t)T*(q[1],2q[2],3q[3],...,nq[n])
T'*N = a[1]+2a[2]t+3a[3]t^2+....+na[n]t^(n-1)
     = (1/t)T*(a[1],2a[2],3a[3],...,na[n])
Q = (T*N)N/|N|^2 = -a[0]N/|N|^2
D=( 1 , 2 , 3 ,...,n )
S=(a[1],2a[2],3a[3],...,na[n])
U=( 1 , 1 , 1 ,...,1 )
T'* U =(1/t)T*D
T'* N =(1/t)T*S
cross-multiplying,
(T'*U)(T*S) - (T'*N)(T*D) = 0
T*((T'*U)S - (T'*N)D) = 0
so T is orthogonal with (T'*U)S - (T'*N)D
or T is orthogonal with (1/t)((T*D)S - (T*S)D)
or T is orthogonal with (T*D)S - (T*S)D
or T is orthogonal with the plane defined by S and D
Let P = any point not on the plane defined by S and D
If V describes the point on the plane that describes
the shortest distance to the plane, then
(P-V)*D = (P-V)*S = 0
where V = mD - pS and m,p are found by substituting
in the above. Then the normal W = P-V and
W T
--- = --- the direction of W and T are the same
|W| |T|
But W intersects the plane (T-Q)*N=0 at T,
(rW-Q)*N = 0 solve for r.
Finally,
**************
T = rW ***solution***
**************
Where
N*Q
r = -----
N*W
Q = -a[0]N/|N|^2
W = P-V
and V is obtained,
(P-mD - pS)*D = 0
P*D - mD*D - pS*D = 0
P*S - mD*S - pS*S = 0
P*D-pS*D P*S - pS*S
m = ----------- = -------------
D*D D*S
(D*S)(P*D)-p(S*D)(D*S)=(P*S)(D*D)-p(S*S)(D*D)
(D*S)(P*D)-(P*S)(D*D)
p = ----------------------
(S*D)(D*S)-(S*S)(D*D)
1 (D*S)(P*D)-(P*S)(D*D)
m = (---)(P*D - ----------------------)
D*D (S*D)(D*S)-(S*S)(D*D)
using this m and p to produce V by,
V = mD - pS
and P in the above is any point *not* on the plane
defined by D and S, or P such that
S*P ± D*P D*S
arccos------ + arccos-------- != arccos ------
|S||P| |D||P| |D||S|
where
U=( 1 , 1 , 1 ,...,1 )
D=( 1 , 2 , 3 ,...,n )
S=(a[1],2a[2],3a[3],...,na[n])
arrived at from
T =(t, t^2, t^3, t^4,...,t^n) and
T'=(1, 2t, 3t^2, 4t^3,...,nt^(n-1)) where
(1/t)T=(1, t, t^2, t^3,...,t^(n-1)) and
T'*U = (1/t)(T*D)
T'*N = (1/t)(T*S)
.
.
.
where
a[0]+a[1]t+a[2]t^2+a[3]t^3+ ... + a[n]t^n = 0
Notice that U could have been another vector of
equal utility, but U=(1,1,1,..,1) is perhaps the
simplest. Also, if a[1]=a[2]=a[3]=..=a[n], then
D and S are parallel and another means has to be
applied to arrive at W.
Jon Giffen
===
Subject: 3d table interpolation, with known gradients
Folks,
Suppose I have, on a regular 3D grid, a function's values and also the 3
gradient components. Bicubic splines don't know what to do with the
gradients. I'm hoping that knowing the gradients on my grid will give me
a faster and more accurate interpolation.
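In 1-D, the standard way to use both values and derivatives is cubic Hermite interpolation; on a regular 3-D grid one can then take tensor products of these basis functions (tricubic Hermite, which also needs the mixed derivatives or an approximation to them). A 1-D Python sketch of the building block:

```python
def hermite(x0, x1, f0, f1, d0, d1, x):
    # cubic Hermite on [x0, x1]: matches the values f0, f1 and the
    # derivatives d0, d1 at the two endpoints
    h = x1 - x0
    t = (x - x0) / h
    h00 = (1 + 2 * t) * (1 - t) ** 2   # Hermite basis functions
    h10 = t * (1 - t) ** 2
    h01 = t * t * (3 - 2 * t)
    h11 = t * t * (t - 1)
    return h00 * f0 + h10 * h * d0 + h01 * f1 + h11 * h * d1

# cubic Hermite reproduces cubics exactly: f(x) = x^3 on [0, 1]
mid = hermite(0.0, 1.0, 0.0, 1.0, 0.0, 3.0, 0.5)   # exact value is 0.125
```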
--
o__ | Paul Probert
,>/'_ | Associate Scientist
(_)(_) | The University of Wisconsin-Madison
| Dept. of Electrical and Computer Engineering
| B426 Engineering Hall
| 1415 Engineering Dr.
| Madison, WI 53706
===
Subject: problems related to sequential quadratic programming (SQP)
I have been trying to solve a problem with a highly nonlinear
objective function and several constraints, both linear and nonlinear.
Some of the linear constraints are simple bound constraints, which I
have implemented with the square of a slack variable or in the case of
non-negativity constraints by using the square of a variable.
My strategy has been to use the method of multipliers (which is
linearly convergent) for a few iterations in order to get close to the
solution. I then switch to Newton's method, in which the variables
and Lagrange multipliers are updated using a linearization of the
Hessian and constraint functions.
This procedure works sometimes, but not always. The main problem that
I have is that the Hessian of the Lagrangian is not positive definite.
I try to get around this by employing an augmented Lagrangian,
consisting of the original Lagrangian plus a multiple of the sum of
the squares of the constraint functions. This does not always work
properly, as the resulting Hessian is sometimes either ill-conditioned
or still not positive definite. I implement matrix inversion
by employing the modified Cholesky decomposition, where I substitute a
positive quantity on the diagonal whenever the computed value of this
quantity is negative.
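The diagonal substitution just described can be sketched as follows (an illustration in Python, not the poster's code; real modified-Cholesky implementations, e.g. Gill and Murray's, also bound the subdiagonal entries to control conditioning):

```python
# Modified Cholesky: whenever the pivot under the square root would be
# negative or tiny, clamp it to a small positive delta. The resulting
# factor L satisfies L*L^T = H + E for some diagonal E >= 0, which is
# positive definite by construction.
def modified_cholesky(H, delta=1e-8):
    n = len(H)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        d = H[j][j] - sum(L[j][k] ** 2 for k in range(j))
        if d < delta:       # pivot would be <= 0: substitute delta
            d = delta
        L[j][j] = d ** 0.5
        for i in range(j + 1, n):
            L[i][j] = (H[i][j] - sum(L[i][k] * L[j][k] for k in range(j))) / L[j][j]
    return L

# An indefinite 2x2 "Hessian" (eigenvalues 3 and -1):
H = [[1.0, 2.0], [2.0, 1.0]]
L = modified_cholesky(H)
print(L)    # real factor with strictly positive diagonal
```

Note the known weakness the poster is running into: with a tiny delta the implied perturbation makes H + E barely positive definite and badly conditioned, which is why production codes bound the perturbation more carefully.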
A standard procedure for this type of problem is to use the BFGS
approach, but I know all of the second-order partials, and hate to
throw away information.
These are the questions for the board:
1) Is there a way to deal with the non-positive definiteness problem
without BFGS? My approach should work, but so far has not.
2) In using a quadratic penalty function for the augmented
Lagrangian, are there suggestions for the size of the constant
multiplying the square of the L-2 norm of the constraint function
vector?
3) In my procedure I use line search for the multipliers step, but
just try to take the full Newton step when the code reaches that step.
Would it be recommended to continue using line search here as well?
Your assistance will be much appreciated!
===
Subject: Re: Help with DE using Reduction in Order.
> I have solved several homework assignments similar to:
> Y + 2Y' = 0 with Y1 = X
> But I have been given the following:
> X^2*Y'' + X*Y' - Y = 1/X with Y1 = X
> and I am unsure how to tackle it since the linear equation is not equal
> to zero. Can anyone pass along some wisdom, as I am stressing over this;
> it is due this week, and while reviewing the instructor's notes as well
> as the book I see no example to fall back on.
let L[y] = x^2*y'' + x*y' - y
note L[x] = 0
what is L[v(x)*x]?
can you solve L[v*x] = 1/x??
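Carrying the hint through (my own worked check, not part of the thread): with L[y] = x^2 y'' + x y' - y and y = v(x)*x one gets L[vx] = x^3 v'' + 3x^2 v', so v' satisfies a first-order linear equation, and one particular solution is y_p = -ln(x)/(2x). A finite-difference verification:

```python
# Check numerically that y_p = -ln(x)/(2x) satisfies
# x^2 y'' + x y' - y = 1/x  (central differences, step h).
import math

def L(y, x, h=1e-5):
    d1 = (y(x + h) - y(x - h)) / (2 * h)
    d2 = (y(x + h) - 2 * y(x) + y(x - h)) / h ** 2
    return x * x * d2 + x * d1 - y(x)

yp = lambda x: -math.log(x) / (2 * x)
for x in (0.5, 1.0, 2.0, 3.0):
    print(L(yp, x), 1 / x)   # residuals match 1/x closely
```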
===
Subject: PDE with boundary condition at infinity
I'm currently concerned with the solution of a reaction-diffusion type
equation in spherical coordinates, i.e.,
dn(r,t)/dt = -w(r) n(r,t) + (D/r) d^2/dr^2 ( r n(r,t) ),
with the initial condition n(r,0) = 1 and the boundary condition
dn(0,t)/dr = 0 and n(infinity,t) = 1.
So far I have been using a finite difference scheme on a
non-equidistant grid and assumed a large value for rmax at the outer
boundary. Although w(r) is a fast decaying function of r, this
procedure requires excessively large values for rmax to correctly
describe the outer boundary. I hope that the asymptotic solution for
w(r)=0 could be used to find a better representation of the boundary
condition. I would be grateful if a dedicated person who is more
involved with solving PDEs than I am would propose a procedure.
Johan
===
Subject: Re: PDE with boundary condition at infinity
> procedure requires excessively large values for rmax to correctly
> describe the outer boundary. I hope that the asymptotic solution for
If you can't afford to make the grid sufficiently large through
brute force memory consumption, then you need to employ an
absorbing boundary condition.
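One concrete way to build such a boundary condition here (my own suggestion, not the previous poster's): for w = 0 the steady solutions of the spherical diffusion equation are n(r) = 1 + A/r, so instead of forcing n(rmax) = 1 one can impose the mixed (Robin) condition dn/dr = -(n - 1)/r at r = rmax. Every 1 + A/r tail satisfies it exactly, which lets rmax be much smaller. A check:

```python
# Verify that any tail n(r) = 1 + A/r satisfies the Robin condition
# dn/dr + (n - 1)/r = 0, using a central finite difference.
def tail(r, A=3.7):          # A is an arbitrary tail amplitude
    return 1.0 + A / r

def robin_residual(n, r, h=1e-6):
    dndr = (n(r + h) - n(r - h)) / (2 * h)
    return dndr + (n(r) - 1.0) / r

for r in (2.0, 5.0, 10.0):
    print(robin_residual(tail, r))   # ~0 at every radius
```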
===
Subject: Re: Can anyone offer any recommendations/ advice ?
Here's a tip. The Top Notch universities are no better for most
of us than a good lesser institution. Many of the best engineers I've
met are from the latter class. Any state college is just fine for all
but the lifetime student.
Dave
>Can anyone offer any recommendations/ advice ?
>My sister who lives in Northern California has a 12 year old child
>who is not doing so well in her grades especially Math.
> What kind of grades qualify as not doing so well?
>What after school tuition / courses can she take ?
> The last thing most kids want to do after school each day is go back to
> school.
> Kids really do need time to be kids. They'll have the rest of their life
> to work their fingers to the bone. In my opinion there are usually other,
> better ways to help a child with their learning skills.
>I have heard of SCORE, Sylvan etc.
>We would like her to get back getting A's in her grades and get into a
>Top Notch University.
> 1) A's are often a lot to expect out of a child.
> 2) The actual grade isn't nearly as important as the knowledge acquired.
> Put another way... some students make A's and learn squat.
> Some students make B's and C's and retain/understand a LOT.
> 3) She's not even in High School yet. You may be thinking a bit ahead of
> the game.
> Dan :-)
>Can anyone pls help. Any suggestions / pointers ?
>Has anyone had any great experiences / successes that they can share ?
>Lisa
===
Subject: Re: Can anyone offer any recommendations/ advice ?
>Here's a tip. The Top Notch universities are no better for most
>of us than a good lesser institution. Many of the best engineers I've
>met are from the latter class. Any state college is just fine for all
>but the lifetime student.
I think places like MIT and Cal Tech demand more from their engineering
students
than most other schools. If my son is good enough to get accepted to those
places, I will be happy to fork over the tuition. I attended an elite Ivy
undergrad school and a good state school for graduate work, and when I was
a teaching assistant I noticed a significant difference in intellectual caliber
between undergrad students at the two schools. Of course, there are some
very bright kids at the flagship state schools who want to save money.
In grad school, there was a tendency for foreign students (usually very
well prepared) and Americans from the Harvards and MITs to pass the qualifying
exam immediately and be exempted from the introductory grad level courses.
Finishing graduate school a year early is worth a lot in extra earnings,
probably enough to offset the higher costs of private universities. At the
undergrad level, I think it is more common for students at public schools
to spend 5 or more years to finish their degree than at private schools --
the parents won't stand for it in the latter case.
===
Subject: Re: Can anyone offer any recommendations/ advice ?
> 2) The actual grade isn't nearly as important as the knowledge acquired.
> Put another way... some students make A's and learn squat.
> Some students make B's and C's and retain/understand a LOT.
In principle you're right, *however*, I'd like to see you argue to the
admissions office: Hey, fellas my grades are buried in double Ds, but I
learned a lot!
===
Subject: boundary conditions for finite element equations
I need to properly incorporate a boundary condition in a Galerkin
finite element system, as applied to
my initial-boundary value problem:
function f(x,t), x in [0,1]
df/dt = d/dx( a(x) df/dx ) + d/dx( b(x) f )
BCs: 1) f(x=0)= 0, 2) df/dx (x=1)=0
initial value: f(x,0)=0.
The problem is with the natural (i.e. second) boundary condition,
because in my case a(1)=0 and b(1)=0, i.e. the flux
J(f) = a(x) df/dx + b(x) f
is zero on the right boundary.
The problem is that if you look at the weak form of my equation,
d/dt int dx w(x) f = - int dx (dw/dx) J(f) + [w J(f)]_{boundaries},
then at the boundary x=1 the boundary term vanishes due to the values
of the functions a(x) and b(x), which does not allow me to take
boundary condition 2) into account.
Any idea how to handle this case?
I thought about some kind of transformation that would lead to a
nondegenerate flux on the boundaries, but failed to find anything
helpful. Or maybe I should use some special version of the FE approach?
I would be grateful for any help.
Vladimir
===
Subject: Mathematica digit handling query
Does anyone know of an easy way to make Mathematica reverse the digits in a
list of numbers? For example, suppose that I used
Table[n^3,{n,1,5}] to generate {1, 8, 27, 64, 125}
Then I need to be able to operate on this list to return
{1,8,72,46,521}
Any pointers would be gratefully accepted
Tony
===
Subject: Re: Mathematica digit handling query
> Does anyone know of any easy way to make mathematica reverse the digits in
a
> list of numbers. For example, suppose that I used
> Table[n^3,{n,1,5}] to generate {1, 8, 27, 64, 125}
> Then I need to be able to operate on this list to return
> {1,8,72,46,521}
> Any pointers would be gratefully accepted
> Tony
In[1]:=
In[2]:=
rev /@ (Range[5]^3)
Out[2]=
{1, 8, 72, 46, 521}
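(The definition of rev evidently dropped out of the post; the In[1] cell is empty. In Mathematica something like rev[n_] := FromDigits[Reverse[IntegerDigits[n]]] would reproduce Out[2], though that is my reconstruction. For comparison, the same digit reversal in Python:)

```python
# Reverse the decimal digits of each cube of 1..5,
# matching the Out[2] above.
def rev(n):
    return int(str(n)[::-1])

print([rev(k ** 3) for k in range(1, 6)])   # [1, 8, 72, 46, 521]
```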
--
Peter Pein
Berlin
===
Subject: Re: matlab to MMA, need some help converting this code.
No takers?
I really could use some starting-off point. I don't know much about MATLAB.
Can someone suggest some ways to convert this code?
sean
>does anyone know both mathematica AND matlab?
>if so can any of you help me out with this?
>i have received some codes written in matlab, and i would like to
>convert it to mma.
>anyone up for the challenge *and* have the time and energy? It will be
>nice to see line by line explanation of the conversion.
>following is the code which I'm having problems understanding.
>sean
>-------
>function result= gillespie(tf, v0)
>% function result= gillespie(tf, v0)
>% runs the Gillespie algorithm for constitutive expression up to
>% a time tf. the value of v0 may be specified but has a default value
>% of 0.01
>if nargin == 1
> v0= 0.01;
>end;
>% initial amounts
>D= 1;
>M= 0;
>N= 0;
>t= 0;
>% initialize other variables
>j= 2;
>nr= 4; % number of reactions
>% create some large result matrix (this is not essential but
>% speeds up the program)
>result= zeros(100000, 4);
>% store data for first time point
>result(1,:)= [t D M N];
>while t < tf
>
> % conversion
> s(1)= D;
> s(2)= M;
> s(3)= N;
>
> % store data
> result(j,:)= [t D M N];
> j= j+1;
>
> % calculate propensities
> a= calpropensities(s, v0);
> a0= sum(a);
>
> % generate two random numbers
> r= rand(1,2);
>
> % calculate time of next reaction
> dt= log(1/r(1))/a0;
>
> % calculate which reaction is next
> for mu= 1:nr
> if sum(a(1:mu)) > r(2)*a0
> break;
> end;
> end;
>
> % carry out reaction
> t= t+dt;
> if mu == 1
> M= M+1;
> elseif mu == 2
> M= M-1;
> elseif mu == 3
> N= N+1;
> elseif mu == 4
> N= N-1;
> end;
>
>end;
>% store data for last time point
>result(j,:)= [t D M N];
>% remove any excess zeros from result
>result(j+1:end,:)= [];
>%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
>function a= calpropensities(s, v0)
>% rates
>v1= 0.04;
>d0= log(2)/180;
>d1= log(2)/3600;
>% conversion
>D= s(1);
>M= s(2);
>N= s(3);
>% propensities
>a(1)= v0*D;
>a(2)= d0*M;
>a(3)= v1*M;
>a(4)= d1*N;
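For whoever attempts the conversion: the MATLAB above translates almost line for line. Here is my own Python rendering (a sketch, not validated against the original), which may make the control flow easier to see before recasting it in Mathematica. One bookkeeping difference: the state is recorded after each reaction rather than at the top of the loop.

```python
# Gillespie algorithm for constitutive gene expression, translated
# from the quoted MATLAB. Reactions: transcription, mRNA decay,
# translation, protein decay.
import math
import random

def propensities(D, M, N, v0):
    # rates from calpropensities: v1, d0, d1
    v1, d0, d1 = 0.04, math.log(2) / 180, math.log(2) / 3600
    return [v0 * D, d0 * M, v1 * M, d1 * N]

def gillespie(tf, v0=0.01, seed=None):
    rng = random.Random(seed)
    D, M, N, t = 1, 0, 0, 0.0
    result = [(t, D, M, N)]
    while t < tf:
        a = propensities(D, M, N, v0)
        a0 = sum(a)
        r1 = 1.0 - rng.random()        # in (0, 1], avoids log(0)
        r2 = rng.random()
        t += math.log(1.0 / r1) / a0   # exponential waiting time
        mu = len(a) - 1                # pick first mu with cumsum > r2*a0
        acc = 0.0
        for k, ak in enumerate(a):
            acc += ak
            if acc > r2 * a0:
                mu = k
                break
        if mu == 0:
            M += 1                     # transcription
        elif mu == 1:
            M -= 1                     # mRNA decay
        elif mu == 2:
            N += 1                     # translation
        else:
            N -= 1                     # protein decay
        result.append((t, D, M, N))
    return result

trace = gillespie(1000.0, seed=1)
print(len(trace), trace[-1])
```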
===
Subject: Re: matlab to MMA, need some help converting this code.
> No takers?
> I really could use some starting-off point. I don't know much about MATLAB.
Well, looking at the code, there is nothing hard about it; all the
functions used by the MATLAB code below exist in Mathematica. Just do it.
If you want someone to convert the code and test the conversion,
then that will take some time (a few hours).
Why not try to do it yourself, and if you have a specific question or
problem with something, then ask.
> Can someone suggests some ways to convert this code?
> sean
>does anyone know both mathematica AND matlab?
>if so can any of you help me out with this?
>i have received some codes written in matlab, and i would like to
>convert it to mma.
>anyone up for the challenge *and* have the time and energy? It will
> be
>nice to see line by line explanation of the conversion.
>following is the code which I'm having problems understanding.
>sean
>-------
> [MATLAB code quoted in full; snipped -- see the original post above]
===
Subject: Re: MuPad 3.0 extraction of individual eigenvectors
Hi John, Ralf, Torsten,
I have read your replies with a great deal of interest.
The answers I received have given me a very clear understanding of how to
use op, map and indexing to do just about anything I need in the future to
extract arguments from the output of one command to use as input into
another.
In fact, I am reasonably confident that I will not need to ask a question
of this nature again (let's not get too carried away here, but I am fairly
sure).
John's idea of point and click to extract arguments has merit. Of course
you still want procedures you can embed in code that you may need to write.
In that case, the aforementioned use of op, map and indexing is still
required.
This leads me to thinking a How To document with several examples would be
very useful. It is often the case that a user of MuPad does not want to
simply take output from a MuPad command, but rather wants to do
something with it, i.e. feed it into another command.
I don't think it has to be a particularly lengthy document, but it would be
good to have a set of examples graded from the pretty standard things people
want to do and also include a few of the more esoteric. I suspect most of
the required techniques are sprinkled in amongst the existing documentation,
but to have them pulled together in one place for the specific purpose of
showing by example what to do may be useful. For example, I was unaware of
op@op, which is a nice little knick-knack described in this current set of
newsgroup replies.
I believe I would probably be OK in the future without the How To document,
but it may be a nice resource for the larger MuPad community - particularly
new users. I wonder if other people agree.
Brad
PS I haven't sent an official suggestion to the MuPad developers yet, but I
thought to see if anything relevant comes from other users first.
===
Subject: Re: MuPad 3.0 extraction of individual eigenvectors
> John's idea of point and click to extract arguments has merit. Of course
you
> still want procedures you can embed in code that you may need to write.
In
> that case, the aforementioned use of op, map and indexing is still
required.
I may have misinterpreted John, but I thought he was talking about a
way of asking MuPAD "if I wanted to extract this, what call would I
need?" I'm not sure what a user interface would look like that allows
one to make this question sufficiently generic. (In the case at hand,
the answer could have been "[Sol[1][3][1], Sol[1][3][2], Sol[2][3][1]]"
instead of the map call, yet building this call requires some more
insight into the nature of the problem than having some subexpressions
marked.)
> This leads me to thinking a How To document with several examples would
be
> very useful. It is often the case that a user of MuPad does not want to
> simply take output from a command to MuPad, but the user often wants do
> something with it i.e. feed it into another command.
I have absolutely no idea why someone would try to put the eigenvectors
of a matrix into linalg::orthog. Obviously, the resulting vectors will
in general not be eigenvectors any longer:
>> A := matrix([[1,2,3],[4,5,6],[9,8,7]]):
>> S := linalg::eigenvectors(A):
>> P := linalg::orthog(map(S, op@op, 3)):
// This call is what I mean, but it triggers a bug in Dom::Matrix,
// which I already sent to the persons in charge of matrices:
// >> zip(A*P[2], P[2], normal@`/`)
// So I do it this way:
>> zip([op(A*P[2])], [op(P[2])], normal@`/`)
-- 1/2 1/2 1/2 --
| 54 313 - 810 117 313 - 1953 156 313 - 3000 |
| ---------------, -----------------, ----------------- |
| 1/2 1/2 1/2 |
-- 313 - 103 7 313 - 127 13 313 - 151 --
>> float(%)
[-1.703910468, -37.03770902, -3.039216243]
I do agree that MuPAD's various functions do not yet do a really good
job of accepting one another's output, but as for transforming them,
I'm afraid the main problem is finding good examples of things that can
be explained, yet do not work out of the box anyway.
> showing by example what to do may be useful. For example, I was unaware
of
> op@op which is a nice little knick-knack described in this current set of
> newsgroup replies.
Granted, the only examples using map(A, f@g) in the tutorium are in the
answers to exercises. Perhaps I should add a few of these where map is
introduced.
It's always a pleasure to do so. (I know I hadn't raised my fingers
in this thread yet.)
Christopher Creutzig
===
Subject: Re: MuPad 3.0 extraction of individual eigenvectors
> [Christopher's message quoted in full; snipped -- see above]
Christopher,
You interpreted my response correctly. Copying and pasting is alright,
if you are using text output, but you have to do it again for each edit
of anything that goes before, and as it works now, you can't copy a
subexpression if you are using graphic output. You mentioned that you
would like to see a specific example; here is an example from solving an
equation set with Laplace transforms:
LapX:=op(op(op(op(op(slveqns,1),2)),1),2)
LapY:=op(op(op(op(op(slveqns,1),2)),2),2)
This worked, but all the nested ops and the numbers took some
experimentation to get right.
If you would like to see the context where this came up, this is a link
to the mnb file:
http://webpages.charter.net/oflaherty01/
--
john
===
Subject: Re: MuPad 3.0 extraction of individual eigenvectors
>This leads me to thinking a How To document with several examples would be
>very useful. It is often the case that a user of MuPad does not want to
>simply take output from a command to MuPad, but the user often wants do
>something with it i.e. feed it into another command.
>I don't think it has to be a particularly lengthy document, but it would
be
>good to have a set of examples graded from the pretty standard things
people
>want to do and also include a few of the more esoteric.
And here the problem starts: what are standard things? There are
so many MuPAD libraries, and every user has a different view of what is
important to him. I think you cannot solve this with one small
document. And there exists a lot of basic documentation. One resource
which is available is the tutorial. There you can select the chapters
about the topics you are interested in. Especially chapter 4, about the
MuPAD objects, should demonstrate a lot of standard things. Then we
have the quick reference. With the MS Windows version there come two
introduction notebooks. For German users (sorry, not available in
English) there are a lot of examples available under:
http://schule.mupad.de/material/index.shtml
But if you have a good idea how such a how-to document could be
structured, we are very interested. Please let us know.
Finally, some advertising for a book. The book by M. Majewski:
MuPAD Pro Computing Essentials, ISBN: 3540219439
is a very good introduction to MuPAD.
In general I agree that for most MuPAD commands (where the output is
not clear), there should be one or two examples of how to use the output
with another command, or at least an option to get the specific data
directly. But again, here the question is how verbose the documentation
should be. For standard data types like lists, sets and sequences
there should normally be no need to do this, because if you want to
work effectively with such a system, some basics must be known.
Have a nice weekend,
Torsten.
===
Subject: Re: MAPLE print content of myfile.m to maple standard output
Hello joe, my computer was down for a while (server problems) so i
could not try your suggestion until now.
I tried it, and this came out:
{mod}
i'm sure i did not just define one mod function :-)
well, do you know what i did wrong?
greets,
stijn
(i'm using maple 8)
> hello, can anyone tell me what to do? I created a file myfile.m a
> while ago where i defined some functions. now i forgot how i named
> them, so i want maple to print the content of myfile.m to the standard
> maple output.
> i tried
>> interface(echo=2);
>> read(myfile.m);
> Try the following
> restart;
> origprocs := {anames(procedure)}:
> read(myfile.m):
> {anames(procedure)} minus origprocs;
> Another possibility, with Maple 9, is to use
> into a repository and then view it using LibraryTools:-Browse (after
> setting libname to include the repository).
> Joe
===
Subject: Re: MAPLE print content of myfile.m to maple standard output
> Hello joe, my computer was down for a while (server problems) so i
> could not try your suggestion until now.
> I tried it, and this came out:
> {mod}
Also, make sure that maple hasn't read the m-file before you
assign oldnames, otherwise it is included in both sets and
won't show up when you do the minus.
You could always do the following, after reading your file,
map(lprint, sort(map(rcurry(convert,string),[anames](procedure)));
and then scan the column of strings for anything you recognize.
This lists all the assigned procedures, so there will be a lot
of extraneous stuff.
Joe
===
Subject: Re: MAPLE print content of myfile.m to maple standard output
> Hello joe, my computer was down for a while (server problems) so i
> could not try your suggestion until now.
> I tried it, and this came out:
> {mod}
> i'm sure i did not just define one mod function :-)
> bon, do you know what i did wrong?
> greets,
> stijn
I forgot about the mod procedure. It will invariably get
added to the set, probably because it is reassigned to modp.
One solution is to convert the items to strings:
> restart:
> joe := proc() whatever end:
> save joe, /home/joe/tmp/joe.m:
> restart;
> oldnames := map(convert,{anames}(procedure),string):
> read /home/joe/tmp/joe.m:
> map(convert,{anames}(procedure),string) minus oldnames;
{joe}
That yours returned the set {mod} indicates the
m-file assigns no procedures.
Joe
===
Subject: Re: which semigroups are rings?
Originator: israel@math.ubc.ca (Robert Israel)
> Can any semigroup be endowed with the structure of an abelian group
> such that the distributive laws are satisfied?
> Are there any necessary or sufficient conditions known for a semigroup
> to admit the structure of an associative ring?
> [ Moderator's note:
> I assume Magpie wants the semigroup operation to be the multiplication
> and the new abelian group operation to be the addition.
> -RI ]
I don't know how helpful it is, but quite a bit is known about the
complementary problem: given an abelian group, what rings have that
group as their additive structure? See L. Fuchs, Infinite Abelian
Groups, Vol. II.
--
Paul Sperry
Columbia, SC (USA)
===
Subject: Non-central Chi-square with different sigma^2, How to find the
distribution
V_1 is a non-central chi-square distribution of degree
2 with non-centrality parameter e. V_2 is a central
chi-square RV of degree 2. V_1 and V_2 are
independent. My goal is to find an optimal x to
maximize the probability P{ (1-x)V_1+xV_2 > r }, where
0 < x < 1 and r > 0. I know that the density of
(1-x)V_1+xV_2 can be expressed as a convolution. But it
does not help in finding the optimal x. So my questions
are:
1) Is there a closed form expression for the pdf of
(1-x)V_1+xV_2 or P{ (1-x)V_1+xV_2 > r} so that the
optimization can be done in closed form?
2) For any r, is P{ (1-x)V_1+xV_2 > r} a convex or
concave function of x so that the maximum is unique.
3) Under what conditions does x=0 give the maximum.
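Lacking a closed form, a quick Monte Carlo scan over x at least shows the shape of x -> P{(1-x)V_1 + xV_2 > r}. This is only a sketch; the values e = 4.0 and r = 3.0 below are arbitrary placeholders, and the noncentrality is put entirely into one Gaussian component.

```python
# Monte Carlo estimate of P{(1-x)V1 + xV2 > r} on a grid of x values.
# V1 ~ noncentral chi-square (2 dof, noncentrality e),
# V2 ~ central chi-square (2 dof), independent.
import math
import random

def estimate(x, e, r, trials=5000, seed=7):
    rng = random.Random(seed)
    mu = math.sqrt(e)            # all noncentrality in one component
    hits = 0
    for _ in range(trials):
        v1 = rng.gauss(mu, 1) ** 2 + rng.gauss(0, 1) ** 2
        v2 = rng.gauss(0, 1) ** 2 + rng.gauss(0, 1) ** 2
        if (1 - x) * v1 + x * v2 > r:
            hits += 1
    return hits / trials

probs = [estimate(x / 10, e=4.0, r=3.0) for x in range(11)]
print(probs)    # probability as a function of x = 0, 0.1, ..., 1
```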
Jin Zhang
===
Subject: Consultants needed -- test development - brief term
At Polytechnic University we have an NSF grant to develop a Calculus
Concept Inventory -- a test for qualitative, conceptual understanding
of the most basic principles of differential calculus. The test is
expected, following the example of the Force Concept Inventory in
physics, to allow determination of whether different teaching
methodologies really do what it is claimed that they do.
We are seeking two persons immediately to serve on the Item Writing
Panel for the first round of item development. Modest consultant fees
will be paid, please inquire for details. Experience in some depth in
the teaching of calculus is essential, and particularly in the
creation of probing, conceptual test items. Work begins as soon as we
choose the panel and continues through the end of January. Continued
item development will take place in the Spring semester and persons
hired may be asked if they wish to continue into the Spring semester.
Attendance at a meeting in or near New York the weekend of January 22
is mandatory (we pay transport, housing, and food of course). Contact
Jerry Epstein:
jepstein@poly.edu or (718) 260-3572.
Inquiries for more information are welcome. Please give a phone number
and good time to call with any inquiry. Applicants should send 5 to 10
sample test items to the above email address. Items for the final test
will have to be free of copyright restrictions, so sample test items
must not be copied from, or very similar to, questions from
established textbooks. Persons who have some years of experience
developing their own test questions in calculus are earnestly sought.
===
Subject: Re: normal subgroups of surface groups
>How can we show that every finitely generated normal subgroup of a
>non-abelian
>surface group (with or without boundary) is of finite index?
A finitely generated surface group is geometrically finite, so its
convex core has finite (co)area. It's not hard to see that the limit
set of a normal subgroup must be the same as that of the whole group,
and thus the convex core of a normal subgroup is the entire surface.
But then the surface cover corresponding to the finitely generated
normal subgroup must have finite area, so it has finite index.
===
Subject: Re: Multidimensional Abel's/Schroder's functional equations
> If it is possible, please, give some reference to literature with
> multidimensional case.
> Unfortunately, in the literature, I have, the only univariate version
> is considered. For example:
> Kuczma M. Functional Equations in a Single Variable. Warszawa:
> PWN-Polish Scientific Publishers, 1968. 383 p.
> Cermak J. Note on Simultaneous Solutions of a System of Schroder's
> Equations // Mathematica Bohemica. Vol. 120, No. 3. 1995. P. 225-236.
> Dubuc S. Problemes relatifs a l'iteration de fonctions suggeres par les
> processus en cascade // Annales de l'institut Fourier. Vol. 21, No. 1.
> 1971. P. 171-251.
> Szekeres G. Abel's Equation and Regular Growth: Variations on a Theme
> by Abel // Experimental Mathematics, Vol. 7 (1998), No. 2. P. 85-100.
> Szekeres G. Regular iteration of real and complex functions // Acta
> Math. 100. 1958. P. 202-258.
> I do not have the book:
> Kuczma M., Choczewski B., Ger R. Iterative Functional Equations.
> Cambridge: Cambridge University Press, 1990. 571 p.
> Maybe you know if it contains the multidimensional case.
Hi Mikhail,
A particular case: when the g are polynomials, this is related to
Mahler's functional equation. These are important for transcendental
number theory.
In fact I am interested to know the solution of other extensions of
these equations, such as f(x^4) = r(x)f(x^2) + s(x)f(x).
For your problem, you may find something on the polynomial cases in
Kumiko Nishioka's Lecture Notes No. 1631, Mahler Functions and
Transcendence
(maybe something in the references included in this book).
Laurent
===
Subject: This week in the mathematics arXiv (1 Nov - 5 Nov)
Here are this week's titles in the mathematics arXiv, available at:
http://front.math.ucdavis.edu/
http://front.math.ucdavis.edu/submissions
This week in the mathematics arXiv may be freely redistributed
with attribution and without modification.
Titles in the mathematics arXiv (1 Nov - 5 Nov)
-----------------------------------------------
AC: Commutative Algebra
-----------------------
math.AC/0411061
Stephen Humphries, Christian Krattenthaler: Trace identities from
identities
for determinants
math.AC/0411020
Christopher Francisco: Resolutions of small sets of fat points
math.AC/0410625
Corina Baciu, Viviana Ene, Gerhard Pfister, Dorin Popescu: Rank two
Cohen-Macaulay modules over singularities of type
$x_1^3+x_2^3+x_3^3+x_4^3$
AG: Algebraic Geometry
----------------------
math.AG/0411101
Markus Reineke: Framed quiver moduli, cohomology, and quantum groups
math.AG/0411097
Erwan Brugallé: Real plane algebraic curves with asymptotically maximal
number of even ovals
math.AG/0411094
Norbert Hoffmann, Ulrich Stuhler: Moduli schemes of rank one Azumaya
modules
math.AG/0411081
Xiaoguang Ma, Jian Zhou: Elliptic Genera of Complete Intersections
math.AG/0411073
C. Casagrande: The number of vertices of a Fano polytope
math.AG/0411064
H. Lange, P. E. Newstead: Coherent Systems on Elliptic curves
math.AG/0411059
I. Bouw, T. Chinburg, G. Cornelissen, C. Gasbarri, D. Glass, C. Lehr, M.
Matignon, F. Oort, R. Pries, S. Wewers: Problems from the workshop on
hep-th/0411037
Tommaso de Fernex, Ernesto Lupercio, Thomas Nevins, Bernardo Uribe: A
Localization Principle for Orbifold Theories
math.AG/0411051
Hirotachi Abo: Construction of rational surfaces of degree 12 in
projective
fourspace (with an appendix by Kristian Ranestad)
math.AG/0411049
Christopher D. Hacon, Sándor J. Kovács: Holomorphic one-forms on
varieties of general type
math.AG/0411045
F.J. Calderon-Moreno, L. Narvaez-Macarro: Dualité et comparaison sur
les
complexes de de Rham logarithmiques par rapport aux diviseurs libres
math.AG/0411038
Chien-Hao Liu, Shing-Tung Yau: Extracting Gromov-Witten invariants of a
conifold from semi-stable reduction and relative GW invariants of pairs
math.AG/0411037
Jim Bryan, Rahul Pandharipande: The local Gromov-Witten theory of curves
math.AG/0411022
Euisung Park: On higher syzygies of ruled surfaces II
math.AG/0410613
Brian Osserman: The generalized Verschiebung map for curves of genus 2
math.AG/0410612
Shunsuke Takagi: Formulas for multiplier ideals on singular varieties
math.AG/0410611
J. Fernández de Bobadilla, I. Luengo-Velasco, A. Melle-Hernández, A.
Némethi: On rational cuspidal projective plane curves
AP: Analysis of PDEs
--------------------
math.AP/0411036
Nirmalendu Chaudhuri, Neil S. Trudinger: A note on Alexandrov type
theorem
for k-convex functions
math.AP/0411032
YanYan Li, Lei Zhang: Compactness of solutions to the Yamabe problem. II
math.AP/0411001
Andras Vasy, Jared Wunsch: Absence of super-exponentially decaying
eigenfunctions on Riemannian manifolds with pinched negative curvature
math.AP/0410619
M. Berti, L. Biasco: Forced vibrations of wave equations with
non-monotone
nonlinearities
math.AP/0410618
M. Berti, P. Bolle: Cantor families of periodic solutions for completely
resonant nonlinear wave equations
AT: Algebraic Topology
----------------------
math.AT/0411080
Nils A. Baas, Ralph L. Cohen, Antonio Ramirez: The topology of the
category
of open and closed strings
math.AT/0411043
Kiyonori Gomi: Differential characters and the Steenrod squares
CA: Classical Analysis and ODEs
-------------------------------
math.CA/0411044
Vyacheslav P. Spiridonov, S. Ole Warnaar: Inversions of integral
operators
and elliptic beta integrals on root systems
math.CA/0411042
Timoteo Carletti, Lilia Rosati, Gabriele Villari: Qualitative analysis of
phase--portrait for a class of planar vector fields via the comparison
method
math.CA/0411004
Stephen Semmes: Potpourri, 9
CO: Combinatorics
-----------------
math.CO/0411098
Shlomo Hoory, Alex Brodsky: Simple Permutations Mix Even Better
math.CO/0411095
Terence Tao, Van Vu: On random $\pm 1$ matrices: Singularity and
Determinant
hep-th/0411044
Patrick Desrosiers, Luc Lapointe, Pierre Mathieu: Jack superpolynomials:
physical and combinatorial definitions
math.CO/0411072
Cilanne Boulet, Igor Pak: A combinatorial proof of the Rogers-Ramanujan
and
Schur identities
math.CO/0411052
Kennan Shelton, Michael Siler: Variations of a Coin-Removal Problem
math.CO/0411041
Ewa Borak: A note on special duality triads and their operator valued
counterparts
math.CO/0411028
Kimmo Eriksson, Jonas Sjostrand, Pontus Strimling: Conjectures on
three-dimensional stable matching
math.CO/0411026
Andrey O. Matveev: Relative blocking in posets
math.CO/0411025
Andrey O. Matveev: Maps on posets, and blockers
math.CO/0411012
Thorsten Theobald: On the frontiers of polynomial computations in
tropical
geometry
math.CO/0411009
Eran Nevo: Embeddability and Stresses of Graphs
math.CO/0411007
Ewa Krot: The First Ascent into the Incidence Algebra of the Fibonacci
Cobweb Poset
math.CO/0411002
A. K. Kwasniewski: On umbral extensions of Stirling numbers and
Dobinski-like formulas
math.CO/0410614
Han Heewon: A mathematical proof of four color theorem
CT: Category Theory
-------------------
math.CT/0411055
Nicholas Jackson: Rack and quandle homology
CV: Complex Variables
---------------------
math.CV/0411100
Claudio Meneghini: Clifton-Pohl torus and geodesic completeness by a
'complex' point of view
math.CV/0411090
Guy Laville, Ivan Ramadanoff: Stone-Weierstrass Theorem
math.CV/0411086
Marco Abate, Filippo Bracci: Ritt's theorem and the Heins map in
hyperbolic
complex manifolds
math.CV/0411083
Sarkis Frederic: On nonimbeddability of topologically trivial domains and
Thin Hartogs figures of $P_2(\mathbb{C})$ into Stein spaces
math.CV/0411048
Franc Forstneric: Extending holomorphic mappings from subvarieties in
Stein
manifolds
DG: Differential Geometry
-------------------------
math.DG/0411079
Xuanguo Huang: The First Eigenvalue for Compact Minimal Embedded
Hypersurface in $S^{n+1}(1)$ ($n\geq 3$) is $n$
math.DG/0411074
Bruno Colbois, Constantin Vernicos: Bas du spectre et
delta-hyperbolicité en géométrie de Hilbert plane
math.DG/0411070
D.Bashkirov, G.Giachetta, L.Mangiarotti, G.Sardanashvily: Noether's
second
theorem in a general setting. Reducible gauge theories
math.DG/0411066
Sebastien Racaniere: Quantisation of Lie-Poisson manifolds
math.DG/0411058
P. N. Ivanshin: Existence of the Ehresmann connection on a manifold
foliated
by the locally free action of a commutative Lie group
math.DG/0411056
Bernd Fiedler: Generators of algebraic curvature tensors based on a
(2,1)-symmetry
hep-th/0411015
J.M. Isidro: Generalised Complex Geometry and the Planck Cone
math.DG/0411030
Ronaldo Garcia, Jorge Sotomayor: On the Patterns of Principal Curvature
Lines around a Curve of Umbilic Points
math.DG/0411024
Yuhan Lim: Defining an SU(3)-Casson/U(2)-Seiberg-Witten integer invariant
for integral homology 3-spheres
math.DG/0411023
Bozhidar Z. Iliev: Linear Transports along Paths in Vector Bundles. I.
General Theory
math.DG/0411010
Knut Smoczyk, Guofang Wang, Y. L. Xin: Mean curvature flow with flat
normal
bundles
DS: Dynamical Systems
---------------------
math.DS/0411085
Marco Abate, Francesca Tovena: Formal normal forms for holomorphic maps
tangent to the identity
FA: Functional Analysis
-----------------------
math.FA/0411067
H. G. Dales, J. F. Feinstein: Banach function algebras with dense
invertible
group
math.FA/0411018
Ravi Montenegro: A sharp isoperimetric bound for convex bodies
GN: General Topology
--------------------
math.GN/0410624
Siofilisi Hingano: On uniformities and uniformly continuous functions on
factor-spaces of topological groups
GR: Group Theory
----------------
math.GR/0411077
Bettina Eick, Delaram Kahrobaei: Polycyclic groups: A new platform for
cryptology?
math.GR/0411076
Delaram Kahrobaei: A simple proof of a theorem of Karrass and Solitar
math.GR/0411075
Delaram Kahrobaei: The Amalgamated product of free groups and residual
solvability
math.GR/0411039
D.V. Osin: Small cancellations over relatively hyperbolic groups and
embedding theorems
math.GR/0411027
D.V. Osin: Relative Dehn functions of amalgamated products and
HNN--extensions
math.GR/0410616
Sean Cleary, Murray Elder, Jennifer Taback: Cone types and geodesic
languages for lamplighter groups and Thompson's group F
GT: Geometric Topology
----------------------
math.GT/0411088
Christine Lescop: On the Kontsevich-Kuperberg-Thurston construction of a
configuration-space invariant for rational homology 3-spheres
math.GT/0411078
Hee Jung Kim: Modifying surfaces in 4-manifolds by twist spinning
math.GT/0411065
Jesse Johnson: Locally Unknotted Spines of Heegaard Splittings
math.GT/0411060
O.N. Karpenkov: Energy of a knot: variational principles; Mm-energy
math.GT/0411057
Tim D. Cochran, Peter Teichner: Knot concordance and von Neumann
$\rho$-invariants
math.GT/0411053
Nathan Geer: The Kontsevich integral and quantized Lie superalgebras
hep-th/0411010
Dave Auckly, Martin Speight: Fermionic quantization and configuration
spaces
for the Skyrme and Faddeev-Hopf models
math.GT/0411050
S. Francaviglia, B. Klaff: Maximal volume representations are fuchsian
math.GT/0411016
Sungbok Hong, Darryl McCullough, J. Hyam Rubinstein: The Smale Conjecture
for lens spaces
math.GT/0410615
J.A.Hillman, S.K.Roushon: Surgery on
$\widetilde{\Bbb{SL}}\times\Bbb{E}^n$-manifolds
HO: History and Overview
------------------------
math.HO/0411091
G. J. Chaitin: Irreducible Complexity in Pure Mathematics
MG: Metric Geometry
-------------------
math.MG/0411093
Allan L. Edmonds, Mowaffaq Hajja, Horst Martini: Coincidences of simplex
centers and related facial structures
math.MG/0411092
Andreas Paffenholz: New polytopes from products
MP: Mathematical Physics
------------------------
nlin.SI/0410049
A. Sergyeyev: Why nonlocal recursion operators produce local symmetries:
new
results and applications
nlin.SI/0410029
A. Sergyeyev: Towards classification of conditionally integrable
evolution
systems in (1+1) dimensions
math-ph/0411022
Vincent Caudrelier, Eric Ragoucy: Spontaneous symmetry breaking in the
non-linear Schrodinger hierarchy with defect
math-ph/0411021
Daniel Arnaudon, Nicolas Crampe, Anastasia Doikou, Luc Frappat, Eric
Ragoucy: Analytical Bethe Ansatz for closed and open gl(n)-spin chains in
any representation
math-ph/0411020
Przemyslaw Repetowicz, Peter Richmond: The Wick theorem for non-Gaussian
distributions and its application for noise filtering of correlated
q-Exponentially distributed random variables
math-ph/0411019
T. M. Garoni: On the asymptotics of some large Hankel determinants
generated
by Fisher-Hartwig symbols defined on the real line
math-ph/0411018
JA Foxman, JM Robbins: Singularities, Lax degeneracies and Maslov indices
of
the periodic Toda chain
math-ph/0411017
JA Foxman, JM Robbins: The Maslov index and nondegenerate singularities
of
integrable systems
math-ph/0411016
I. V. Krasovsky: Absolute value of the characteristic polynomial in the
Gaussian Unitary Ensemble or a singular Hankel determinant
math-ph/0411015
G. van Baalen: Downstream asymptotics in exterior domains: from
stationary
wakes to time periodic flows
math-ph/0411014
Antonella D'Avanzo, Giuseppe Marmo: Reduction and unfolding: the Kepler
problem
math-ph/0411013
Taichiro Takagi: Separation of colour degree of freedom from dynamics in
a
soliton cellular automaton
math-ph/0411012
Jochen Bruening, Sergey Dobrokhotov, Konstantin Pankrashkin: The Spectral
Asymptotics of the Two-Dimensional Schrodinger operator with a Strong
Magnetic Field
math-ph/0411011
D. Bambusi & B. Grebert: Birkhoff Normal Form for PDEs with Tame Modulus
hep-th/0411030
G. Akemann, J.C. Osborn, K. Splittorff, J.J.M. Verbaarschot: Unquenched
QCD
Dirac Operator Spectra at Nonzero Baryon Chemical Potential
nlin.CD/0410066
M. Bernardo, M. Courbage, T.T. Truong: Multidimensional Gaussian sums
arising from distribution of Birkhoff sums in zero entropy dynamical
systems
math-ph/0411010
R. Perez-Alvarez, F. Garcia-Moliner: Transfer Matrices and Green
Functions
for the study of elementary excitations in multilayered heterostructures
math-ph/0411009
Fabian Brau: Lower bounds for the spinless Salpeter equation
math-ph/0411008
Fabian Brau: Sufficient conditions for the existence of bound states in a
central potential
math-ph/0411007
G.H.M. van der Heijden, M.A. Peletier, R. Planqué: Self-contact for
rods
on cylinders
math-ph/0411006
O.N. Kirillov, A.A. Mailybaev, A.P. Seyranian: Unfolding of eigenvalue
surfaces near a diabolic point due to a complex perturbation
math-ph/0411005
Fabian Brau, Monique Lassaut: Critical strength of attractive central
potentials
hep-th/0411016
I. I. Cotăescu, M. Visinescu: Symmetries and supersymmetries of the
Dirac operators in curved spacetimes
hep-th/0411005
G.Giachetta, L.Mangiarotti, G.Sardanashvily: Polysymplectic Hamiltonian
formalism and some quantum outcomes
nucl-th/0410114
G. Hagen, M. Hjorth-Jensen, J.S. Vaagen: Effective Interaction Techniques
for the Gamow Shell Model
nlin.SI/0410043
M. Bertola, B. Eynard, J. Harnad: Semiclassical orthogonal polynomials,
matrix models and isomonodromic tau functions
math-ph/0411004
Valeri V. Dvoeglazov, J. L. Quintanar Gonzalez: Helicity Basis for Spin
1/2
and 1, and Discrete Symmetry Operations
math-ph/0411003
Peter Kuchment: Quantum Graphs II: Some spectral properties of quantum
and
combinatorial graphs
math-ph/0411002
Kaihua Cai: Dispersion for Schrodinger Operators with One-gap
Periodic
Potentials on R^1
math-ph/0411001
F. Bentosela, P. Duclos, G. Nenciu, V. Moldoveanu: The dynamics of 1D
Bloch
electrons in constant electric fields
math-ph/0410059
Manfred Requardt: Supersymmetry on Graphs and Networks
cond-mat/0410701
Ekrem Aydiner: Anomalous Rotational Relaxation: A Fractional
Fokker-Planck
Equation Approach
hep-th/0410284
R. Jackiw: Inserting Group Variables into Fluid Mechanics
hep-th/0410277
Gerhard Mack, Mathias de Riese: Simple Space-Time Symmetries:
Generalizing
Conformal Field Theory
NT: Number Theory
-----------------
math.NT/0411099
Alexey Zykin: The Brauer-Siegel and Tsfasman-Vladut Theorems for Almost
Normal Extensions of Number Fields
math.NT/0411096
M. Sabitova: Root numbers of curves
math.NT/0411089
Kent E. Morrison: The polynomial analogue of a theorem of Renyi
math.NT/0411087
Marco Dalai: Recurrence relations for the Lerch Phi function and
applications
math.NT/0411084
Patrice Philippon & Martin Sombra: Quelques aspects diophantiens des
varietes toriques projectives
math.NT/0411054
O.N. Karpenkov: On examples of two-dimensional periodic continued
fractions
math.NT/0411040
Aleksandar Ivić: The Mellin transform of the square of Riemann's
zeta-function
math.NT/0411035
Mahdi Asgari, Freydoon Shahidi: Generic Transfer for General Spin Groups
cs.DM/0411002
Alex Vinokur: Fibonacci-Like Polynomials Produced by m-ary Huffman Codes
for
Absolutely Ordered Sequences
math.NT/0411031
O.N. Karpenkov: On constructing multidimensional periodic continued
fractions
math.NT/0411005
Alfred J. van der Poorten: Quadratic irrational integers with partly
prescribed continued fraction expansion
math.NT/0410617
Nicole Lemire, Jan Minac, John Swallow: When is Galois cohomology free or
trivial?
OA: Operator Algebras
---------------------
math.OA/0411062
Boris Tsirelson: On automorphisms of type II Arveson systems
(probabilistic
approach)
math.OA/0411021
Alan L. Carey, John Phillips, Adam Rennie, Fyodor A. Sukochev: The Local
Index Formula in Semifinite von Neumann Algebras II: The Even Case
math.OA/0411019
Alan L. Carey, John Phillips, Adam Rennie, Fyodor A. Sukochev: The Local
Index Formula in Semifinite von Neumann Algebras I: Spectral Flow
PR: Probability
---------------
math.PR/0411071
Rick Durrett, Jason Schweinsberg: A coalescent model for the effect of
advantageous mutations on the genealogy of a population
math.PR/0411069
Jason Schweinsberg, Rick Durrett: Random partitions approximating the
coalescence of lineages during a selective sweep
math.PR/0411011
Nathanael Berestycki: The hyperbolic geometry of random transpositions
math.PR/0411008
Sergio Albeverio, Carlo Marinelli: On the reconstruction of the drift of
a
diffusion from transition probabilities which are partially observed in
space
math.PR/0410622
Jason Fulman: Stein's Method and Minimum Parsimony Distance after
Shuffles
QA: Quantum Algebra
-------------------
hep-th/0411020
Louise Dolan, Chiara R. Nappi: Spin Models and Superconformal Yang-Mills
Theory
math.QA/0411029
Patrick M. Gilmer, Gregor Masbaum: Integral Lattices in TQFT
math.QA/0411003
Masoud Khalkhali, Bahram Rangipour: Cup Products in Hopf-Cyclic
Cohomology
math.QA/0410621
Alastair Hamilton, Andrey Lazarev: Homotopy algebras and noncommutative
geometry
RA: Rings and Algebras
----------------------
math.RA/0411082
Vesselin Drensky, Georgi K. Genov, Angela Valenti: Multiplicities in the
mixed trace cocharacter sequence of two $3\times 3$ matrices
math.RA/0411063
Anders Skovsted Buch: Eigenvalues of Hermitian matrices with positive sum
of
bounded rank
math.RA/0411046
Agata Smoktunowicz: On primitive ideals in polynomial rings over nil
rings
math.RA/0410626
J. Gomez-Torrecillas, M. Zarouali Darkaoui: Frobenius functors for
corings
math.RA/0410620
Amnon Neeman, Andrew Ranicki: Noncommutative localisation in algebraic
K-theory I
RT: Representation Theory
-------------------------
math.RT/0411017
V. Kreiman, V. Lakshmibai, P. Magyar, J. Weyman: Standard Bases for
Affine
SL(n)-Modules
math.RT/0411006
Hiroshi Oda, Toshio Oshima: Minimal polynomials and annihilators of
generalized Verma modules of the scalar type
SG: Symplectic Geometry
-----------------------
math.SG/0411068
Simon Hochgerner: Singular cotangent bundle reduction and spin
Calogero-Moser systems
math.SG/0411015
Ciprian Manolescu: Nilpotent slices, Hilbert schemes, and the Jones
polynomial
math.SG/0411014
Mei-Lin Yau: Vanishing of the contact homology of overtwisted contact
3--manifolds
math.SG/0410623
Mei-Lin Yau: Invariants of Lagrangian surfaces
SP: Spectral Theory
-------------------
math.SP/0411013
J. Fleckinger, E.M. Harrell, F. de Thélin: On the fundamental
eigenvalue
ratio of the p-Laplacian
ST: Statistics
--------------
math.ST/0411047
Vladislav Kargin, Alexei Onatski: Dynamics of Interest Rate Curve by
Functional Auto-Regression
physics/0411003
Alvaro Gonzalez, Javier B. Gomez, Amalio F. Pacheco: Updating Seismic
Hazard
at Parkfield
math.ST/0411034
Jianqing Fan: A selective overview of nonparametric methods in financial
econometrics
math.ST/0411033
Sergey Tarima, Yuriy Dmitriev, Richard Kryscio: A hierarchical technique
for
estimating location parameter in the presence of missing data
q-bio.GN/0410033
A.N. Gorban, T.G. Popova, A.Yu. Zinovyev: Four basic symmetry types in
the
universal 7-cluster structure of 143 complete bacterial genomic sequences
--
/ Greg Kuperberg (UC Davis)
/ Home page: http://www.math.ucdavis.edu/~greg/
/ Visit the Math ArXiv Front at http://front.math.ucdavis.edu/
/ * All the math that's fit to e-print *
===
Subject: Re: FFT result evaluation
>I have a signal with harmonics that are not integer multiples of the
>fundamental frequency, so it is not possible to guarantee that the FFT
>window captures a full period of each harmonic of the analysed signal.
>The result of the FFT is a spectrum in which each harmonic is spread
>over several bins, and it is not possible to read off the correct
>magnitude of the harmonics. Is it possible to calculate the correct
>signal magnitude from the spread-out spectrum of each harmonic? I found
>a way to calculate the magnitude for each spread-out peak, but I cannot
>find the correct mathematical description of this problem. Does anybody
>have an idea how to define this mathematically?
The FFT (or any other DFT) covers a finite range in the time domain,
which is equivalent to multiplying the time series by a boxcar function.
The standard convolution theorem of Fourier analysis describes this in
the frequency domain as a convolution with the DFT of the boxcar, known
as the sinc function. To prevent the negative lobes of the oscillatory
sinc function from removing or partially cancelling the area under the
correct peak in the frequency domain, one usually modifies the boxcar
function to round off the sharp edges in the time domain and so avoid
the Gibbs oscillations of the sinc function; the keywords to search for
these techniques are apodization, Norton-Beer, etc.
This way one can, to a good extent, ensure that the area under the
dispersed peaks in the frequency domain represents the amplitudes of
the original series.
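A minimal numerical sketch of this idea (my own illustration, not from the post): a sinusoid with a non-integer number of cycles per window leaks across many bins, but summing the energy of the dispersed peak and inverting Parseval's relation for the chosen window still recovers the amplitude. The signal, frequency, window choice, and peak width below are all assumptions made for the demo.

```python
import numpy as np

N = 1024
A_true = 1.7                      # amplitude we try to recover
f = 50.37                         # cycles per window: NOT an integer -> leakage
n = np.arange(N)
x = A_true * np.cos(2 * np.pi * f * n / N)

w = np.hanning(N)                 # Hann window tames the sinc sidelobes
X = np.fft.fft(x * w)

# Sum the energy of the dispersed positive-frequency peak (a few bins on
# either side of the nominal frequency) and invert Parseval's relation,
#   sum_k |X[k]|^2 = N * sum_n |x[n] w[n]|^2,
# using that half the energy of a real sinusoid sits in the positive peak.
peak = slice(int(f) - 5, int(f) + 7)
E_peak = np.sum(np.abs(X[peak]) ** 2)
A_est = 2.0 * np.sqrt(E_peak / (N * np.sum(w ** 2)))

# Despite the leakage, the estimate matches the true amplitude closely.
assert abs(A_est - A_true) / A_true < 0.01
```

With the plain boxcar (no window), the same peak-sum would be contaminated by the slowly decaying sinc sidelobes; the Hann window concentrates the energy so a narrow slice suffices.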
http://www.strw.leidenuniv.nl/~mathar
===
Subject: solving SAT: generating extended resolution proofs using techniques
for resolution
The most successful techniques for solving SAT to date work by searching
for resolution refutations [1], [2], [3]. It is well known that
resolution refutations have exponential length for some rather trivial
problems (pigeonhole, reordering XOR, reordering addition, etc.).
Extended resolution is resolution that additionally allows the
definition of new boolean variables. There are no problems in NP which
are known to require exponential-length extended resolution refutations
(at least, I do not know of any).
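To make "definition of new boolean variables" concrete, here is a tiny sketch (mine, not from any of the cited solvers) of the Tseitin-style clauses that extended resolution may introduce: a fresh variable x defined as a AND b, verified by brute force. Literals are DIMACS-style signed integers, and the function names are invented for the illustration.

```python
from itertools import product

def and_extension(x, a, b):
    """Clauses defining a fresh variable x <-> (a AND b), as allowed
    in extended resolution. Literals are signed ints (DIMACS style)."""
    return [(-a, -b, x), (a, -x), (b, -x)]

def satisfied(clauses, assignment):
    # assignment maps variable number -> bool
    return all(any(assignment[abs(l)] == (l > 0) for l in c)
               for c in clauses)

# Under the three defining clauses, x is forced to equal a AND b
# in every total assignment: the definition adds no real constraint
# on a and b, only a name for their conjunction.
clauses = and_extension(3, 1, 2)
for a, b in product([False, True], repeat=2):
    models = [x for x in (False, True)
              if satisfied(clauses, {1: a, 2: b, 3: x})]
    assert models == [a and b]
```

Resolution proofs over the extended clause set may then resolve on x, which is what lets extended resolution shortcut, e.g., the pigeonhole formulas.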
So I have been trying to find a way to extend the techniques which work
so well at finding short resolution refutations to finding short
extended resolution refutations.
I would appreciate it if people could post advice, references, or
results which might be helpful in this quest.
[1] Joao Marques-Silva, Karem Sakallah: GRASP - A New Search Algorithm
for Satisfiability. Proc. ICCAD 1996.
[2] Matthew Moskewicz, Conor Madigan, et al.: Chaff - Engineering an
Efficient SAT Solver.
[3] Evgueni Goldberg, Yakov Novikov: BerkMin - A Fast and Robust
SAT-Solver.
- Will Naylor
email: pub@willnaylor.com