Subject: Re: Plot axis numbers
Modify the automatic Ticks
Show[plt = Plot[x, {x, 0, 1.9},
DisplayFunction -> Identity],
DisplayFunction -> $DisplayFunction,
Ticks -> {Automatic,
((Ticks /. AbsoluteOptions[plt, Ticks]) /.
{y_?NumberQ, yl_?NumberQ, r__} :>
{y, PaddedForm[yl, {3, 2}], r})[[2]]}];
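The same trick can be packaged as a small helper so that any plot's numeric tick labels get a fixed printed width; the helper name below is illustrative, not part of the original post:

(* pad every numeric tick label to two decimal places *)
padTicks[ticks_List] :=
  ticks /. {y_?NumberQ, yl_?NumberQ, r___} :>
    {y, PaddedForm[yl, {3, 2}], r}

(* usage sketch: padTicks[Last[Ticks /. AbsoluteOptions[plt, Ticks]]]
   turns a label such as 0.5 into 0.50 *)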
===
> Subject: Plot axis numbers
> In short:
> How can I ensure that all numbers on the plot axes have the same number
> of digits?
> If e.g. the y-axis in a plot goes from 0 to 1 in steps of 0.25, then the
> axis numbers will be, e.g., 0.25 and 0.5. I want this to be 0.25 and
> 0.50. Any suggestions other than manually adjusting the numbers with the
> FrameTicks option?
> - T.M. Sørensen
> (Mathematica 5)
===
Subject: New Article : ShadowPlot3D
Does anyone have trouble with the Graphics3D package on version 5.1, and
specifically with the ShadowPlot3D function? Is there a solution for it to
work properly?
Marc
===
Subject: Re: debugging
Sorry, I failed to send an example of how CallingFunction[] works. Here it
is.
First I define the function MinBiPartition[], which accepts several
arguments of different types and includes a branch to deal with errors of
the kind treated by CallingFunction[]. The MinBiPartition definition
includes calls to other functions not defined here (since they are part of
a larger system).
(* this loads the package with CallingFunction *)
Needs["GeneralServices`CallingFunctionControl`"];
Off[General::spell1, General::spell]
(* this defines MinBiPartition[] *)
ClearAll[MinBiPartition];
MinBiPartition::argument = "Function call within the context [* `1` *] has \
failed because it submitted the improper argument [`2`]. The name of the \
function whose body wrapped the faulty call is among those in the list `3`. \
Valid arguments for this call must belong to one of the following types: \
`4`.";
MinBiPartition[side_?SingletonListQ]:= DoNothing
MinBiPartition[side_?VoydQ]:= DoNothing
MinBiPartition[side_?SingletonQ]:= DoNothing
MinBiPartition[side_?CompleteGraphQ]:= DoNothing
MinBiPartition[side_?FakeCliqueQ]:= DoNothing
MinBiPartition[side_?SinglePairListQ]:= DoNothing
MinBiPartition[side_?ProperFlatListQ]:= DoNothing
MinBiPartition[wrong___] := With[{stack = Stack[_][[1]]},
  Message[MinBiPartition::argument, stack, wrong,
    CallingFunction[ToString[Hold[stack]]], FunctionDomain[MinBiPartition]];
  AbortProtect[Abort[];
    Print["Computation will be aborted due to improper function call."]]]
(* This is an instrumental function used to test CallingFunction[] *)
fff[] := Module[{}, one; two; MinBiPartition[Fake Invalid Argument]; three;
  four]
(* This is another instrumental function used to test CallingFunction[] *)
Otherfff[] := Module[{}, one; two; MinBiPartition[Invalid Argument]; three;
  four]
(* This makes a wrong call to fff[]*)
fff[]
(* and the rest is the result rendered by the wrong call to fff[] *)
MinBiPartition::argument: Function call within the context [*
Module[{}, one; two; MinBiPartition[Fake Invalid Argument]; three; four] *] has
failed because it submitted the improper argument [Fake Invalid Argument].
The name of the function whose body wrapped the faulty call is among those
in the list {fff[]}. Valid arguments for this call must belong to one of the
following types:
{MinBiPartition[side_?SingletonListQ], MinBiPartition[side_?VoydQ],
MinBiPartition[side_?SingletonQ], MinBiPartition[side_?CompleteGraphQ],
<<14>><<1>><<1>>], MinBiPartition[side_?SinglePairListQ],
MinBiPartition[side_?ProperFlatListQ], MinBiPartition[wrong___]}.
Computation will be aborted due to improper function call.
$Aborted
-----Original Message-----
===
Subject: debugging
The lack of debugging tools is, no doubt, an issue in Mathematica. Below is
a utility I use for debugging, mainly to control the flow of calling
functions.
(* :Context: DFramework`DFrameworkMessage`*)
(* :Title: Design Framework Environment *)
(* :Author: E.Martin-Serrano - In Houston - Texas (USA) *)
(* :Mathematica Version:5.0.1 *)
(* :Summary: CallingFunction[stack] yields a list of functions. The
meaning
of this list of functions is that described ahead. Let us have a function
'f', whose body contains a call to another function 'f1'. Let us suppose
that the call to 'f1', inside the body of 'f', fails due to an improper
parameter passing. This failure may typically happen in situations of
uncertainty: either the system is under construction (debugging) or in
production when the parameters are produced dynamically from inputs
inherited from other functions. To identify and locate the source of the
problem, and the point of the failure, it is necessary to know three
things:
1) the piece of code wrapping the point from where 'f1' was called when the
failure came up, 2) The name of the parent function 'f' owning the piece of
code wrapping the calling point to 'f1' where the failure came up, and 3)
the domain of valid parameters for the function 'f1'. The necessity of this
utility comes from the fact that any function may, in principle, be invoked
from many different points or functions in a network-like manner, so there
is not a way to know in advance which function is the culprit of a
particular failure.*)
(* :Keywords: Testing, debugging, messages *)
(* :Contents:*)
(* :Discussion: The present implementation is the first version and is
mainly meant to describe the purpose of the utility itself, yet it seems to
be working properly as far as it is currently used. The implementation
should probably be improved in a more efficient and elegant manner*)
(* :Sources: The function 'Domain' was taken from Ted Ersek's Tricks and
slightly modified to accept arguments of the form 'arg___' *)
(* :Credits: The function 'Domain[]' in its original form is due to Ted
Ersek, as mentioned above. David Park has been helping me a lot with his
suggestions about systematic package development *)
(* :Warning: I have not performed systematic, thorough testing *)
BeginPackage["GeneralServices`CallingFunctionControl`"]
Unprotect[Evaluate[$Context <> "*"]]
FunctionDomain::usage =
  "FunctionDomain[function] computes the valid domain of the arguments \
accepted by 'function[]'.";
CallingFunction::usage =
  "CallingFunction[stack] yields a list of functions. The meaning of this \
list of functions is that described ahead. Let us have a function 'f', whose \
body contains a call to another function 'f1'. Let us suppose that the call \
to 'f1', inside the body of 'f', fails due to an improper parameter passing. \
This failure may typically happen in situations of uncertainty: either the \
system is under construction (debugging) or in production when the \
parameters are produced dynamically from inputs inherited from other \
functions. To identify and locate the source of the problem, and the point \
of the failure, it is necessary to know three things: 1) the piece of code \
wrapping the point from where 'f1' was called when the failure came up; 2) \
the name of the parent function, 'f', owning the piece of code that wraps \
the calling point to 'f1' where the failure came up; and 3) the domain of \
valid parameters for the function 'f1'. The necessity of this utility comes \
from the fact that any function may, in principle, be invoked from many \
different points or functions in a network-like manner, so that there is not \
an easy way to know in advance which function is the culprit of a particular \
failure.";
Begin["`Private`"]

FunctionDomain[f_] := Module[{result},
  result = HoldForm @@ {First /@ DownValues[f]};
  result = result /. Verbatim[HoldPattern][e_] :> e
]

CallingFunction[stack_] :=
  Module[{downvalues, callingcontext, stp, index, functions},
    downvalues =
      DownValues[#] & /@
        Cases[Map[ToExpression, Names[Context[] <> "*"]], _Symbol] // Flatten;
    callingcontext = StringDrop[stack, 5];
    callingcontext = StringDrop[callingcontext, -1];
    stp = StringPosition[#, callingcontext] & /@ (ToString[#] & /@ downvalues);
    index = Table[If[stp[[i]] != {}, {i}, {}], {i, Length[stp]}] // Flatten;
    functions = StringDrop[ToString[First[downvalues[[#]]]], 12] & /@ index;
    StringDrop[#, -1] & /@ functions
  ]
End[]
Protect[Evaluate[$Context <> "*"]]
EndPackage[]
-----Original Message-----
===
Subject: debugging
Do you mean it wasn't an error that referenced a line number? Some
errors in Mathematica do come with line numbers; I thought you had one
of those kinds and were just looking for a way to display line numbers.
You could add your own type checking/debugging code that will produce
its own warning messages. That would help you locate the problem.
I usually start from the name of the function that produces an error
and tear my function apart, call by call, until I find the step
at which the output went from good to bad.
You may find Ctrl+Shift+B useful for this. You may also wish to make
use of the Interrupt[], Abort[], and Throw/Catch calls.
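As a minimal sketch of the kind of self-reporting type check suggested
above (the function, predicate, and message text below are invented for
illustration):

(* a message plus a catch-all definition that reports bad arguments *)
ClearAll[safeSqrt];
safeSqrt::badarg = "safeSqrt expected a non-negative number, got `1`.";

safeSqrt[x_?NonNegative] := Sqrt[x]
safeSqrt[bad___] :=
  (Message[safeSqrt::badarg, {bad}]; Throw[$Failed, safeSqrt])

(* wrap top-level calls in Catch so a bad inner call bails out cleanly
   instead of propagating garbage through 50+ lines of code *)
Catch[safeSqrt[-2] + 1, safeSqrt]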
> There are no line numbers in the reported bugs, of course.
> > You could load the code into the vi editor (or your favorite text
> > editor) and turn on the line numbers.
> > >
> > > My question concerns debugging. I've done googling to find a quick
> > > answer but have failed. I'm developing large functions, and without
> > > line numbers telling me where the errors occurred, it's becoming very
> > > difficult to develop code quickly. Can anyone point me in the right
> > > direction in terms of debugging multi-line (at least 50+ lines)
> > > functions?
> > >
> > > dan
> > >
> > >
--
Chris Chiasson
http://chrischiasson.com/
1 (810) 265-3161
===
Subject: Re: debugging
In all that, I see no clue how to get the stack argument to
CallingFunction. Without that, we can't call the function.
How about an example?
Bobby
--
DrBob@bigfoot.com
===
Subject: Re: debugging
(Sorry again, I corrected some spelling mistakes in my previous post. I
hope this is a bit clearer. I am always in a hurry.)
Sorry, I failed to send an example of how CallingFunction[] works. Here it
is.
First I define the function MinBiPartition[], which accepts several
arguments of different types and includes a branch to deal with errors of
the kind treated by CallingFunction[]. The MinBiPartition definition
includes calls to other functions not defined here (since they are part of
a larger system).
(* this load the package with Function Call *)
Needs[GeneralServices`CallingFunctionControl`];
Off[General::spell1,General::spell]
(* this defines MinBiPartition[] *)
ClearAll[MinBiPartition];
MinBiPartition::argument = Function call within the context [* `1` *]
has
failed because it submitted the improper argument [`2`]. The name of the
function whose body wrapped the faulty call is among those in the list `3`.
Valids arguments for this call must belong to one the following types:
`4`.;
MinBiPartition[side_?SingletonListQ]:= DoNothing
MinBiPartition[side_?VoydQ]:= DoNothing
MinBiPartition[side_?SingletonQ]:= DoNothing
MinBiPartition[side_?CompleteGraphQ]:= DoNothing
MinBiPartition[side_?FakeCliqueQ]:= DoNothing
MinBiPartition[side_?SinglePairListQ]:= DoNothing
MinBiPartition[side_?ProperFlatListQ]:= DoNothing
MinBiPartition[wrong___] := With[{stack = Stack[_][[1]]},
  Message[MinBiPartition::argument, stack, wrong,
    CallingFunction[ToString[Hold[stack]]], FunctionDomain[MinBiPartition]];
  AbortProtect[Abort[];
    Print["Computation will be aborted due to improper function call."]]]
(* This is an instrumental function used to test CallingFunction[], by
issuing a wrong call to MinBiPartition[] with Fake Invalid Argument as the
wrong argument; the fake invalid argument could take any variable form in a
dynamic environment *)
fff[] := Module[{}, one; two; MinBiPartition[Fake Invalid Argument]; three;
  four]
(* This is another instrumental function used to test CallingFunction[], by
issuing a wrong call to MinBiPartition[] with Invalid Argument as the wrong
argument; the invalid argument could take any variable form in a dynamic
environment *)
Otherfff[] := Module[{}, one; two; MinBiPartition[Invalid Argument]; three;
  four]
(* This makes a wrong call to MinBiPartition[] through the call to fff[] *)
fff[]
(* and the rest is the result rendered by the wrong call to fff[] *)
MinBiPartition::argument: Function call within the context [*
Module[{}, one; two; MinBiPartition[Fake Invalid Argument]; three; four] *] has
failed because it submitted the improper argument [Fake Invalid Argument].
The name of the function whose body wrapped the faulty call is among those
in the list {fff[]}. Valid arguments for this call must belong to one of the
following types:
{MinBiPartition[side_?SingletonListQ], MinBiPartition[side_?VoydQ],
MinBiPartition[side_?SingletonQ], MinBiPartition[side_?CompleteGraphQ],
<<14>><<1>><<1>>], MinBiPartition[side_?SinglePairListQ],
MinBiPartition[side_?ProperFlatListQ], MinBiPartition[wrong___]}.
Computation will be aborted due to improper function call.
$Aborted
-----Original Message-----
===
Subject: Re: how call a function by same name in 2 different contexts?
> I was expecting this output
> Context[FourierTransform]
> SignalProcessing`Support`SigProc`
> System`
Names["*`FourierTransform"]
or
?*`FourierTransform
> So that I can see where each function is and then call any one
> I want without ambiguity.
> Any idea how to handle all of this stuff? I do not think it is nice
> that one package hides away another function of the same name in a
> different package. The whole idea of using packages is that this
> sort of thing should not happen!
I cannot see a good way to avoid this. How is Mathematica supposed to
know which function you want to use at any particular point?
What the package mechanism should ensure is that the packages aren't
confused. That is, if two packages have the same symbol, then package A
should never use package B's symbol.
You can add new, unambiguous symbols that point to these definitions.
Something like:
SigTransForm = SignalProcessing`Support`SigProc`FourierTransform
SysTransForm = System`FourierTransform
Of course, all this does is save on typing.
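Alternatively, either definition can be reached at any time by its full
context name, with no alias at all (the package context below is the one
quoted above):

(* call the built-in version explicitly, bypassing any shadowing *)
System`FourierTransform[Exp[-t^2], t, w]

(* call the package version explicitly *)
SignalProcessing`Support`SigProc`FourierTransform[f[t], t, w]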
> Is there a way to unload a specific package after loading it without
> having to restart Mathematica to clean things out?
> nma124
Perhaps http://library.wolfram.com/infocenter/MathSource/602/
----------------------------------------------
Omega Consulting
The final answer to your Mathematica needs.
http://omegaconsultinggroup.com
===
Subject: Re: how call a function by same name in 2 different contexts?
> > I was expecting this output
> > Context[FourierTransform]
> > SignalProcessing`Support`SigProc`
> > System`
> Names["*`FourierTransform"]
> or
> ?*`FourierTransform
The first command above does not show the full contexts:
Names["*`FourierTransform"]
{FourierTransform, System`FourierTransform}
But the second one (the ?* command) does, which is
what I wanted. However, I am having a hard time cutting and pasting
the context string shown in that command's output into
the notebook, as it is formatted
in a different font/display (bold inside a pink box), which means
I have to type the context shown by hand, and I make typos. I need
to figure out how to make the output of the ?* command show up as plain
text to make it easier to copy.
> > Any idea how to handle all of this stuff? I do not think it is nice
> > that one package hides away another function of the same name in a
> > different package. The whole idea of using packages is that this
> > sort of thing should not happen!
> I cannot see a good way to avoid this. How is Mathematica supposed to
> know which function you want to use at any particular point?
By using the full context path to each function. I understand
this. I was asking how to find these paths, which you showed me.
But that was only part of the question. I am now talking
about one name hiding another name.
> What the package mechanism should ensure is that the packages aren't
> confused. That is if 2 packages have the same symbol, then package A
> should never use package B's symbol.
Yes, of course, but there is more to it.
The problem is that in Mathematica, loading package A with a
function named 'foo' hides away the function 'foo' in package B.
This is the problem.
What should happen is the following:
load package A; (which contains a function named 'foo')
load package B; (which also contains a function named 'foo' with the same
signature as A.foo)
Now if I type
foo[];
then this is ambiguous: which 'foo' should be called, A's or B's?
The user must type
B.foo[] or A.foo[]
What Mathematica is now doing is calling B.foo because B
just happened to be loaded after A. This is wrong. In a later
session, if I happen to load package B before A, then I will
get different results (assuming both 'foo' functions accept the
same arguments), and I would have a hard time figuring out what
changed and why the results are now different.
In my example, foo was in the System context. I loaded package
B (the signal processing package), then typed 'foo' (the
FourierTransform function). Mathematica decided to call
the foo inside the signal processing package. This is wrong.
Do you now see the problem?
Mathematica should generate an error saying the call
is ambiguous and print the different contexts leading to
a function called foo. This is what happens in languages
that support packages. This is the whole point of using
packages.
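For what it is worth, the rule Mathematica actually applies is positional
rather than an ambiguity check: a bare name resolves to the first context
on $ContextPath that contains it. A minimal sketch (the PkgA`/PkgB`
contexts are invented for illustration):

(* two contexts, each defining foo *)
PkgA`foo[] := "A";
PkgB`foo[] := "B";

(* put both on the search path; the one prepended last sits first *)
PrependTo[$ContextPath, "PkgA`"];
PrependTo[$ContextPath, "PkgB`"];

foo[]         (* resolves to PkgB`foo, the first match on $ContextPath *)
Context[foo]  (* reports which context won the lookup *)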
> You can add new, unambiguous symbols that point to these definitions.
> Something like:
> SigTransForm = SignalProcessing`Support`SigProc`FourierTransform
> SysTransForm = System`FourierTransform
Ok, that helps. But this does not solve the core problem being
discussed.
Steve
===
Subject: Re: how call a function by same name in 2 different contexts?
> If you execute:
> Unprotect[FourierTransform];
> Remove[FourierTransform]
> that will remove the built-in version, so then if you load the package
> you should be able to see the other version.
Did you actually read my question?
Steve
===
Subject: Re: ShadowPlot3D
> Does anyone know why the function ShadowPlot3D does not give the
expected results in version 5.1 (same as in the book), even if one loads
the Graphics3D package?
Salut Marc,
SetOptions[ShadowPlot3D, Lighting -> False];
after loading the package will help.
Or edit the package Graphics3D.
--
Peter Pein
Berlin
===
Subject: Re: InitializationCell -> Toggle shortcut key
> mathgroup,
> i'd like to create a keyboard shortcut to toggle initialization cell
> status.
> reading through the archives and studying a post from paul hinton, i
> edit the file KeyEventTranslations.tr (deeply buried in the filesystem)
> to include the following line:
> Item[KeyEvent["i", Modifiers -> {Command, Control}], InitializationCell
> -> Toggle]
> i save, restart, and blow up. (something about a syntax error on line
> 52 of the file that reads "Item[KeyEvent[".)
> my intuition of what's wrong is that i stole the InitializationCell ->
> Toggle bit from MenuSetup.tr in the same directory and that a rule like
> that won't do as the second argument to Item[ ]; my guess is that i'll
> need a three-part FrontEndToken[ ] object to pass into the second
> position of Item[ ], but i haven't a clue as to what the correct
> structure for that FrontEndToken[ ] would be.
> anyone have any suggestions on formulating an initialization cell
> toggle Item[ ] for KeyEventTranslations.tr?
I have the following code that creates a button that toggles the init
cell status on/off. Note: the code below is a snippet from a notebook
that I used to create a palette that includes the Init Cell button in
question.
ButtonBox[
  RowBox[{"Toggle", " ", RowBox[{"Init", " ", "Cell"}]}],
  ButtonFunction :> FrontEndExecute[{
    FrontEnd`FrontEndToken[
      SelectedNotebook[], "InitializationCell",
      "Toggle"]}],
  ButtonEvaluator -> None,
  Active -> True,
  ButtonStyle -> "PaletteButton",
  ButtonFrame -> "DialogBox"]
Lee
===
Subject: Re: InitializationCell -> Toggle shortcut key
aha.
actually many of these suggestions will work, including my original
post using, simply,
InitializationCell -> Toggle
the problem was that i was editing KeyEventTranslations.tr in
*mathematica* rather than using a straight out text editor. editing in
vi suddenly made everything ok.
(isn't vi always the answer?)
also, i've come across something really interesting in this experiment.
specifically, these two key event translation items have subtly but
importantly different effects:
Item[KeyEvent["i", Modifiers -> {Command, Control}],
FrontEndExecute[{
FrontEnd`FrontEndToken[SelectedNotebook[ ],
"InitializationCell", Toggle]
}]
]
and
Item[KeyEvent["i", Modifiers -> {Control, Option}],
FrontEndExecute[{
FrontEnd`FrontEndToken[SelectedNotebook[ ],
"InitializationCell", Toggle]
}]
]
notice that the *only* difference is in the choice of modifier keys.
yet if you add these to your KeyEventTranslations.tr file and try them
out, you'll find that the control+option combination works *with your
cursor sitting in the middle of a cell* while the command+control works
*with an entire cell or collection of cells selected*.
actually, the two combinations complement each other perfectly.
but how utterly bizarre that the choice of modifier key can change the
effect of the FrontEndToken.
trevor.
===
Subject: Re: InitializationCell -> Toggle shortcut key
> yet if you add these to your KeyEventTranslations.tr file and try them
> out, you'll find that the control+option combination works *with your
> cursor sitting in the middle of a cell* while the command+control works
> *with an entire cell or collection of cells selected*.
> actually, the two combinations complement each other perfectly.
> but how utterly bizarre that the choice of modifier key can change the
> effect of the FrontEndToken.
This is true of many FrontEnd menu commands. The Option key generally
makes the command apply to the next higher level. For example, when
you have a content selection Command-1 changes the selection to Title
style (in most StyleSheets), and Command-Option-1 changes the entire
cell to Title style. This is by design.
Rob Raguet-Schofield
Wolfram Research
===
Subject: Re: InitializationCell -> Toggle shortcut key
> mathgroup,
> i'd like to create a keyboard shortcut to toggle initialization cell
> status.
> reading through the archives and studying a post from paul hinton, i
> edit the file KeyEventTranslations.tr (deeply buried in the filesystem)
> to include the following line:
> Item[KeyEvent[i, Modifiers -> {Command, Control}], InitializationCell
> -> Toggle]
> i save, restart, and blow up. (something about a syntax error on line
> 52 of the file that reads tItem[KeyEvent[.)
> my intuition of what's wrong is that i stole the InitializationCell ->
> Toggle bit from MenuSetup.tr in the same directory and that a rule like
> that won't do as the second argument to Item[ ]; my guess is that i'll
> need a three-part FrontEndToken[ ] object to pass into the second
> position of Item[ ], but i haven't a clue as to what the correct
> structure for that FrontEndToken[ ] would be.
> anyone have any suggestions on formulating an initialization cell
> toggle Item[ ] for KeyEventTranslations.tr?
> trevor.
I can't tell you what front end commands to use for your problem, but I have
a suggestion. Have you thought about adding the line
Item["&Initialization Cell", InitializationCell->Toggle]
somewhere in the file PopupMenuSetup.tr? If you do so, then you can click on
the cell bracket you want to toggle, then right click to get the popup
menu (in Windows), and then select Initialization Cell.
Carl Woll
===
Subject: Re: InitializationCell -> Toggle shortcut key
The KeyEvent should be the 3rd argument to Item[], not the 1st:
Item["Initialization Cell", InitializationCell->Toggle, KeyEvent["i",
Modifiers->{Command,Control}]]
-Rob
===
Subject: Re: InitializationCell -> Toggle shortcut key
> The KeyEvent should be the 3rd argument to Item[], not the 1st:
I'm sorry, I misread your question. I mistakenly thought you were
referring to MenuSetup.tr instead of KeyEventTranslations.tr. If
you're still having trouble with KeyEventTranslations.tr you could try
MenuSetup.tr (or PopupMenuSetup.tr as someone else already mentioned).
-Rob
===
Subject: Re: ZTransform[Sin[4 n],n,z] OK, but ZTransform[Sin[5 n],n,z] hangs?
> Hello;
> This is Mathematica 5.1 on windows.
> This is a strange one.
> When I start Mathematica and type
> ZTransform[Sin[4 n],n,z]
> Then I get the answer.
> but if I replace '4' by '5' above, I see Mathematica starts
> to consume a huge amount of memory until the PC hangs running
> ZTransform[Sin[5 n],n,z] <---- hangs
> ZTransform[Sin[4 n],n,z] <---- ok
> anyone have any idea what is going on? is this some subtle
> mathematical thing I am overlooking that is causing this?
> what is so special about '5' here?
> Steve
Hello Steve,
This appears to have been caused by a hang in Limit (which is used by
ZTransform for convergence testing), when we use Sin[5 n] instead of
Sin[4 n] in ZTransform, in Mathematica 5.
As a workaround, you could try applying TrigToExp in the example that
hangs, as shown below.
In[1]:= $Version
Out[1]= 5.1 for Linux (February 20, 2005)
In[2]:= ZTransform[TrigToExp[Sin[5 n]], n, z] // Together//InputForm
Out[2]//InputForm=
((I/2)*(-1 + E^(10*I))*z)/(-E^(5*I) + z + E^(10*I)*z - E^(5*I)*z^2)
Sorry for the inconvenience caused by this problem.
Devendra Kapadia.
Wolfram Research, Inc.
===
Subject: Re: ZTransform[Sin[4 n],n,z] OK, but ZTransform[Sin[5 n],n,z] hangs?
> Hello;
> This is Mathematica 5.1 on windows.
> This is a strange one.
> When I start Mathematica and type
> ZTransform[Sin[4 n],n,z]
> Then I get the answer.
> but if I replace '4' by '5' above, I see Mathematica starts
> to consume a huge amount of memory until the PC hangs running
> ZTransform[Sin[5 n],n,z] <---- hangs
> ZTransform[Sin[4 n],n,z] <---- ok
> anyone have any idea what is going on? is this some subtle
> mathematical thing I am overlooking that is causing this?
> what is so special about '5' here?
> Steve
Hi Steve,
I don't know what happens in the kernel, but
ZTransform[Sin[a n], n, z] /. a -> 5
gives a quick answer.
--
Peter Pein
Berlin
===
Subject: Re: ZTransform[Sin[4 n],n,z] OK, but ZTransform[Sin[5 n],n,z] hangs?
Steve, yes, indeed strange. It must be some unfortunate limitation (error?)
in an internal evaluation, possibly algebraic explosion. The same problem
occurs for other integers and reals; nothing special about '5'. Other
manifestations of the problem abound:
ZTransform[ Sin[2 Pi n], n, z] -> 0 (* ok*)
ZTransform[ Sin[8 Pi n], n, z] (* had to abort *)
What is also strange to me is the returned result for
ZTransform[ Sin[ 1.5 n], n, z]
which comes back in terms of a rational representation for 1.5. I may be
wrong, but I do not recall Mathematica returning an exact result given an
approximate input?
Mariusz
Hello;
This is Mathematica 5.1 on windows.
This is a strange one.
When I start Mathematica and type
ZTransform[Sin[4 n],n,z]
Then I get the answer.
but if I replace '4' by '5' above, I see Mathematica starts
to consume a huge amount of memory until the PC hangs running
ZTransform[Sin[5 n],n,z] <---- hangs
ZTransform[Sin[4 n],n,z] <---- ok
anyone have any idea what is going on? is this some subtle
mathematical thing I am overlooking that is causing this?
what is so special about '5' here?
Steve
===
Subject: Re: Controlled evaluation of functions
Brett,
The reason Sin doesn't further evaluate is that you have exact expressions
and Mathematica knows no further rules for it. But make i in your Table an
approximate number and Sin will evaluate and behave just like your k.
So if you don't want k to evaluate, don't give it a definition! Or rather,
make it a rule and then use the rule when you want evaluation. And in
general, if you want controlled evaluation use rules instead of
definitions.
krule = k -> (#^2 &);
f[i_, x_] := k[i x]
g[x_] = Table[f[i, x], {i, 3}]
{k[x], k[2 x], k[3 x]}
{3, 0, 1}.g[y]
% /. krule
3 k[y] + k[3 y]
12*y^2
David Park
djmp@earthlink.net
http://home.earthlink.net/~djmp/
Consider the following behaviour:
In[1]:= f[i_, x_] := Sin[i x]
In[2]:= g[x_] = Table[f[i, x], {i, 3}]
Out[2]= {Sin[x], Sin[2 x], Sin[3 x]}
In[3]:= {3, 0, 1} . g[y]
Out[3]= 3 Sin[y] + Sin[3 y]
This is what I want to do, but using my own function instead of Sin.
However, this is the result:
In[4]:= k[x_] := x^2 (* This is my alternative to Sin *)
In[5]:= f[i_, x_] := k[i x]
In[6]:= g[x_] = Table[f[i, x], {i, 3}]
Out[6]= {x^2, 4 x^2, 9 x^2} (* I want {k[x], k[2 x], k[3 x]} *)
In[7]:= {3, 0, 1} . g[y]
Out[7]= 12 y^2 (* I want 3 k[y] + k[3 y] *)
How can I get the function k to behave like Sin, so that it is not
evaluated?
Note that in my real application, k is a lot more complex and has
conditions on its arguments, etc.
Brett Patterson
===
Subject: Re: Variant of inner Product ...
Just realized I read your question incorrectly... once I understood, I
realized that my solution puts you back at square one... Hartmut
Wolf's solution seems to do the trick, though.
> >-----Original Message-----
> >From: Detlef Müller
===
> >Subject: Variant of inner Product ...
> >I have the following to do:
> >Given
> >In[1]:= A={1,2,3}; B={{a,b},{c,d},{r,s}};
> >And a Function f, I like to have
> >Out[2] = {f[1,a],f[1,b]}+{f[2,c],f[2,d]}+{f[3,r],f[3,s]}
> >The trial
> >In[8]:=A={1,2,3}; B={{a,b},{c,d,e},{r,s}};
> >In[9]:= Inner[f,A,B]
> >Out[9]= f[1,{a,b}]+f[2,{c,d,e}]+f[3,{r,s}]
> >looks promising,
> >but if the Lists in B have the same length, Inner
> >makes something different:
> >In[15]:=
> >A={1,2,3}; B={{a,b},{c,d},{r,s}}; Inner[f,A,B]
> >Out[16]= {f[1,a]+f[2,c]+f[3,r],f[1,b]+f[2,d]+f[3,s]}
> >So for now I have an ugly Table-Construction doing the job,
> >but I can't imagine there is no elegant and clear solution
> >for this ... any suggestions?
> > Detlef
> Detlef,
> your desired expression will be reduced further, because Plus is
> Listable. So I'll show it with CirclePlus:
> Observe:
> In[38]:= Inner[f, A, B, g]
> Out[38]=
> {g[f[1, a], f[2, c], f[3, r]], g[f[1, b], f[2, d], f[3, s]]}
> So just use the right stuff for g:
> In[39]:= CirclePlus @@ Transpose@Inner[f, A, B, List]
> Out[39]=
> {f[1, a], f[1, b]} ⊕ {f[2, c], f[2, d]} ⊕ {f[3, r], f[3, s]}
> Perhaps this appears simpler (?):
> In[40]:= Thread[f[A, B]]
> Out[40]= {f[1, {a, b}], f[2, {c, d}], f[3, {r, s}]}
> In[41]:= CirclePlus @@ Thread /@ Thread[f[A, B]]
> Out[41]=
> {f[1, a], f[1, b]} ⊕ {f[2, c], f[2, d]} ⊕ {f[3, r], f[3, s]}
> --
> Hartmut Wolf
--
Chris Chiasson
http://chrischiasson.com/
1 (810) 265-3161
===
Subject: Re: Controlled evaluation of functions
you can always use an undefined function that will serve as a dummy
head for your expressions, and then use a replacement rule to replace
this dummy head with your true function.
k[x_] := x^2; (* your original function *)
f[i_, x_] := dh[i x]; (* dh stands for Dummy Head *)
g[x_] := Table[f[i, x], {i, 3}]; (* use SetDelayed in your code, not Set *)
Then
g[y] returns
{dh[y], dh[2 y], dh[3 y]}
and
{3, 0, 1}.g[y] returns
3 dh[y] + dh[3 y]
Now it's time to use pattern matching (or just replacing heads).
first option
g[x]/.dh[x_]->k[x]
second option
g[x]/.dh->k
third option
Map[Apply[k,#]&,g[x]]
or, more briefly,
Apply[k,#]&/@g[x]
yehuda
> Consider the following behaviour:
> In[1]:= f[i_, x_] := Sin[i x]
> In[2]:= g[x_] = Table[f[i, x], {i, 3}]
> Out[2]= {Sin[x], Sin[2 x], Sin[3 x]}
> In[3]:= {3, 0, 1} . g[y]
> Out[3]= 3 Sin[y] + Sin[3 y]
> This is what I want to do, but using my own function instead of Sin.
> However, this is the result:
> In[4]:= k[x_] := x^2 (* This is my alternative to Sin *)
> In[5]:= f[i_, x_] := k[i x]
> In[6]:= g[x_] = Table[f[i, x], {i, 3}]
> Out[6]= {x^2, 4 x^2, 9 x^2} (* I want {k[x], k[2 x], k[3 x]} *)
> In[7]:= {3, 0, 1} . g[y]
> Out[7]= 12 y^2 (* I want 3 k[y] + k[3 y] *)
> How can I get the function k to behave like Sin, so that it is not
> evaluated?
> Note that in my real application, k is a lot more complex and has
> conditions on its arguments, etc.
> Brett Patterson
===
Subject: Re: Controlled evaluation of functions
Use
myrule=k[x_]->x^2
Then whenever you decide to evaluate k, append
/.myrule
to the end of the statement.
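A sketch of that full workflow, reusing Brett's k and f (the comments show the outputs I would expect; the /.myrule step is where evaluation finally happens):

```mathematica
myrule = k[x_] -> x^2;          (* evaluation deferred into a rule *)
f[i_, x_] := k[i x]
g[x_] = Table[f[i, x], {i, 3}]  (* {k[x], k[2 x], k[3 x]} *)
expr = {3, 0, 1}.g[y]           (* 3 k[y] + k[3 y] *)
expr /. myrule                  (* 12 y^2 *)
```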
> Consider the following behaviour:
> In[1]:= f[i_, x_] := Sin[i x]
> In[2]:= g[x_] = Table[f[i, x], {i, 3}]
> Out[2]= {Sin[x], Sin[2 x], Sin[3 x]}
> In[3]:= {3, 0, 1} . g[y]
> Out[3]= 3 Sin[y] + Sin[3 y]
> This is what I want to do, but using my own function instead of Sin.
> However, this is the result:
> In[4]:= k[x_] := x^2 (* This is my alternative to Sin *)
> In[5]:= f[i_, x_] := k[i x]
> In[6]:= g[x_] = Table[f[i, x], {i, 3}]
> Out[6]= {x^2, 4 x^2, 9 x^2} (* I want {k[x], k[2 x], k[3 x]} *)
> In[7]:= {3, 0, 1} . g[y]
> Out[7]= 12 y^2 (* I want 3 k[y] + k[3 y] *)
> How can I get the function k to behave like Sin, so that it is not
> evaluated?
> Note that in my real application, k is a lot more complex and has
> conditions on its arguments, etc.
> Brett Patterson
--
Chris Chiasson
http://chrischiasson.com/
1 (810) 265-3161
===
Subject: Re: Controlled evaluation of functions
> Consider the following behaviour:
> In[1]:= f[i_, x_] := Sin[i x]
> In[2]:= g[x_] = Table[f[i, x], {i, 3}]
> Out[2]= {Sin[x], Sin[2 x], Sin[3 x]}
> In[3]:= {3, 0, 1} . g[y]
> Out[3]= 3 Sin[y] + Sin[3 y]
> This is what I want to do, but using my own function instead of Sin.
> However, this is the result:
> In[4]:= k[x_] := x^2 (* This is my alternative to Sin *)
> In[5]:= f[i_, x_] := k[i x]
> In[6]:= g[x_] = Table[f[i, x], {i, 3}]
> Out[6]= {x^2, 4 x^2, 9 x^2} (* I want {k[x], k[2 x], k[3 x]} *)
> In[7]:= {3, 0, 1} . g[y]
> Out[7]= 12 y^2 (* I want 3 k[y] + k[3 y] *)
> How can I get the function k to behave like Sin, so that it is not
> evaluated?
> Note that in my real application, k is a lot more complex and has
> conditions on its arguments, etc.
> Brett Patterson
there are at least 2 possibilities:
a) leave k undefined
b) k[x_?NumericQ]:=x^2 evaluate for numeric arguments only
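A minimal sketch of option (b): restricting k to numeric arguments keeps symbolic expressions unevaluated, exactly the way Sin behaves, while explicit numbers still evaluate.

```mathematica
k[x_?NumericQ] := x^2           (* fires only for explicit numbers *)
f[i_, x_] := k[i x]
g[x_] = Table[f[i, x], {i, 3}]  (* {k[x], k[2 x], k[3 x]} -- stays symbolic *)
expr = {3, 0, 1}.g[y]           (* 3 k[y] + k[3 y] *)
expr /. y -> 2.                 (* numeric argument triggers evaluation: 48. *)
```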
--
Peter Pein
Berlin
===
Subject: Re: Controlled evaluation of functions
Clear[f, k]
k[x_]:=x^2
f[1,x_]:=HoldForm@k[x]
f[i_,x_]:=HoldForm@k[i x]
g[x_]=Table[f[i,x],{i,3}]
{k[x],k[2 x],k[3 x]}
{3,0,1}.g[y]
ReleaseHold@%
3 k[y]+k[3 y]
12*y^2
Bobby
> Consider the following behaviour:
> In[1]:= f[i_, x_] := Sin[i x]
> In[2]:= g[x_] = Table[f[i, x], {i, 3}]
> Out[2]= {Sin[x], Sin[2 x], Sin[3 x]}
> In[3]:= {3, 0, 1} . g[y]
> Out[3]= 3 Sin[y] + Sin[3 y]
> This is what I want to do, but using my own function instead of Sin.
> However, this is the result:
> In[4]:= k[x_] := x^2 (* This is my alternative to Sin *)
> In[5]:= f[i_, x_] := k[i x]
> In[6]:= g[x_] = Table[f[i, x], {i, 3}]
> Out[6]= {x^2, 4 x^2, 9 x^2} (* I want {k[x], k[2 x], k[3 x]} *)
> In[7]:= {3, 0, 1} . g[y]
> Out[7]= 12 y^2 (* I want 3 k[y] + k[3 y] *)
> How can I get the function k to behave like Sin, so that it is not
> evaluated?
> Note that in my real application, k is a lot more complex and has
> conditions on its arguments, etc.
> Brett Patterson
--
DrBob@bigfoot.com
===
Subject: Re: ZTransform[Sin[4 n],n,z] OK, but ZTransform[Sin[5 n],n,z] hangs?
Same behavior on 5.0 Windows XP SP2
I suppose it would get an answer eventually
> Hello;
> This is Mathematica 5.1 on windows.
> This is a strange one.
> When I start Mathematica and type
> ZTransform[Sin[4 n],n,z]
> Then I get the answer.
> but if I replace '4' by '5' above, I see Mathematica starts
> to consume a huge amount of memory until the PC hangs running
> ZTransform[Sin[5 n],n,z] <---- hangs
> ZTransform[Sin[4 n],n,z] <---- ok
> anyone have any idea what is going on? is this some subtle
> mathematical thing I am overlooking that is causing this?
> what is so special about '5' here?
> Steve
--
Chris Chiasson
http://chrischiasson.com/
1 (810) 265-3161
===
Subject: Re: Plot axis numbers
Show[Graphics[
First[FullGraphics@
Block[{$DisplayFunction=Identity},
Plot[Sin[0.2 x],{x,-10,10}]]]/.Text[0.5`,blah__]->
Text[0.50,blah]],PlotRange->All]
or maybe
Show[Graphics[
First[FullGraphics@
Block[{$DisplayFunction=Identity},
Plot[Sin[0.2 x],{x,-10,10}]]]/.Text[x_,blah__]:>
Text[SetPrecision[x,2],blah]],PlotRange->All]
> In short:
> How can I ensure that all numbers on the plot axes have the same number
> of digits?
> If e.g. the y-axis in a plot goes from 0 to 1 in steps of 0.25, then the
> axis numbers will be e.g 0.25 and 0.5. I want this to be 0.25 and
> 0.50. Any suggestions other than manually adjusting the numbers with the
> FrameTicks option?
> - T.M.S¿rensen
> (Mathematica 5)
--
Chris Chiasson
http://chrischiasson.com/
1 (810) 265-3161
===
Subject: Re: Plot axis numbers
Torquil,
I'm not certain how you got a plot that had those particular y axis tick
labels. In any case, you would have to fiddle with FrameTicks to obtain the
formatting you want for the numbers.
Here is a similar example.
nformat = NumberForm[#, {3, 2}, NumberPadding -> {"", "0"}] &;
Plot[Sqrt[x], {x, 0, 2.5},
Frame -> True,
FrameTicks -> {Table[{x, nformat[x]}, {x, 0, 2.5, 0.5}],
Table[{y, nformat[y]}, {y, 0, 1.5, 0.25}], None, None},
ImageSize -> 450];
The only trouble with this is that the small ticks are lost and there are
no tick marks on the upper and right hand side of the frame. It would take
much more work to get all of those correct.
With DrawGraphics from my web site below, you can use the CustomTicks
function to generate the tick marks. (You could even make the tick values a
function of the actual plot positions.) Here is the same example with small
ticks and unlabeled ticks on the top and right.
nformat = NumberForm[#, {3, 2}, NumberPadding -> {"", "0"}] &;
Draw2D[
{Draw[Sqrt[x], {x, 0, 2.5}]},
Frame -> True,
FrameTicks ->
{CustomTicks[Identity, {0, 2.5, 0.5, 5}, CTNumberFunction -> nformat],
CustomTicks[Identity, {0, 1.5, 0.25, 5}, CTNumberFunction -> nformat],
CustomTicks[Identity, {0, 2.5, 0.5, 5}, CTNumberFunction -> ("" &)],
CustomTicks[Identity, {0, 1.5, 0.25, 5}, CTNumberFunction -> ("" &)]},
Background -> Linen,
ImageSize -> 450];
The number function ("" &) simply suppresses the display of the tick value.
The lists in CustomTicks are {starting value, ending value, increment,
number of intervals for small ticks}.
David Park
djmp@earthlink.net
http://home.earthlink.net/~djmp/
In short:
How can I ensure that all numbers on the plot axes have the same number
of digits?
If e.g. the y-axis in a plot goes from 0 to 1 in steps of 0.25, then the
axis numbers will be e.g 0.25 and 0.5. I want this to be 0.25 and
0.50. Any suggestions other than manually adjusting the numbers with the
FrameTicks option?
- T.M.S¿rensen
(Mathematica 5)
===
Subject: Re: Bug in Integrate in Version 5.1?
There is more peculiar result there:
Try the indefinite integral
a=Integrate[-3 (x^2 )Log[1 - Exp[-x]], x]
to get
-3*(x^4/12 + (1/3)*x^3*Log[1 - E^(-x)] +
(1/3)*((-x^3)*Log[1 - E^x] -
3*x^2*PolyLog[2, E^x] + 6*x*PolyLog[3, E^x] -
6*PolyLog[4, E^x]))
this expression is undefined at both x=0 and x->Infinity, but
it does converge to a limit in both cases.
So
Limit[a,x->Infinity]-Limit[a,x->0] does give the exact result (i.e.,
Pi^4/15).
I cannot figure out why using the definite integral does not return
the true value.
yehuda
> Integrate gives the following answer for this integral:
> a = Integrate[x^3 /(Exp[x] - 1), {x, 0, Infinity}]
> N[a]
> Out[1]= Pi^4/15
> Out[2]= 6.49394
> which I think is correct.
> This integral, which should be the same ( by partial integration),
> gives:
> b = Integrate[-3 x^2 Log[1 - Exp[-x]], {x, 0, Infinity}]
> N[b]
> Out[3]= (11*Pi^4)/60
> Out[4]= 17.8583
> while numerical integration gives:
> NIntegrate[-3x^2 Log[1 - Exp[-x]], {x, 0, Infinity}]
> Out[5]= 6.49394
> This is done with version 5.1.
> Version 4.2 gives
> c=Integrate[-3*x^2*Log[1 - Exp[-x]], {x, 0, Infinity}]
> N[c]
> Out[1]= Pi^4/15
> Out[2]= 6.49394
> (Remarkably version 4.2. complaints: Series::esss: Essential
> singularity
> encountered in ... while calculating the correct result. )
> So the result in version 5.1. looks wrong.
> Or did I make a mistake?
> Alexander
===
Subject: Re: Bug in Integrate in Version 5.1?
Maybe they smoked some of their sums and improper definite integrals
with that change in the output of the limit command. A hunch from a
previous thread:
(The integration is correct in 5.0 (happy happy joy joy), so I don't
know if this will work for you)
Limit[Integrate[-3*x^2*Log[1-Exp[-x]],{x,a,Infinity}],a->0]
> Integrate gives the following answer for this integral:
> a = Integrate[x^3 /(Exp[x] - 1), {x, 0, Infinity}]
> N[a]
> Out[1]= Pi^4/15
> Out[2]= 6.49394
> which I think is correct.
> This integral, which should be the same ( by partial integration),
> gives:
> b = Integrate[-3 x^2 Log[1 - Exp[-x]], {x, 0, Infinity}]
> N[b]
> Out[3]= (11*Pi^4)/60
> Out[4]= 17.8583
> while numerical integration gives:
> NIntegrate[-3x^2 Log[1 - Exp[-x]], {x, 0, Infinity}]
> Out[5]= 6.49394
> This is done with version 5.1.
> Version 4.2 gives
> c=Integrate[-3*x^2*Log[1 - Exp[-x]], {x, 0, Infinity}]
> N[c]
> Out[1]= Pi^4/15
> Out[2]= 6.49394
> (Remarkably version 4.2. complaints: Series::esss: Essential
> singularity
> encountered in ... while calculating the correct result. )
> So the result in version 5.1. looks wrong.
> Or did I make a mistake?
> Alexander
--
Chris Chiasson
http://chrischiasson.com/
1 (810) 265-3161
===
Subject: Re: Bug in Integrate in Version 5.1?
> Integrate gives the following answer for this integral:
> a = Integrate[x^3 /(Exp[x] - 1), {x, 0, Infinity}]
> N[a]
> Out[1]= Pi^4/15
> Out[2]= 6.49394
> which I think is correct.
> This integral, which should be the same ( by partial integration),
> gives:
> b = Integrate[-3 x^2 Log[1 - Exp[-x]], {x, 0, Infinity}]
> N[b]
> Out[3]= (11*Pi^4)/60
> Out[4]= 17.8583
> while numerical integration gives:
> NIntegrate[-3x^2 Log[1 - Exp[-x]], {x, 0, Infinity}]
> Out[5]= 6.49394
> This is done with version 5.1.
> Version 4.2 gives
> c=Integrate[-3*x^2*Log[1 - Exp[-x]], {x, 0, Infinity}]
> N[c]
> Out[1]= Pi^4/15
> Out[2]= 6.49394
> (Remarkably version 4.2. complaints: Series::esss: Essential
> singularity
> encountered in ... while calculating the correct result. )
> So the result in version 5.1. looks wrong.
> Or did I make a mistake?
> Alexander
And to make things more confusing (in 5.1):
ival = Integrate[-3*x^2*Log[1 - Exp[-x]], {x, x0, z}];
Simplify[Subtract @@ (Limit[ival, z -> #1] & ) /@ {Infinity, 0}]
Pi^4/15
Mathematica fails to use its capability to calculate the correct answer?
And all of these give Pi^4/15 too:
the b from above in another form:
Integrate[-3*x^2*(Log[Exp[x] - 1] - x), {x, 0, Infinity}]
after integrating by parts again:
Integrate[6*x*PolyLog[2, Exp[-x]], {x, 0, Infinity}]
and integrating by parts once more:
6*Integrate[PolyLog[3, Exp[-x]], {x, 0, Infinity}]
--
Peter Pein
Berlin
===
Subject: Re: Bug in Integrate in Version 5.1?
Here is a workaround
$Version
5.1 for Mac OS X (January 27, 2005)
a=Integrate[x^3/(Exp[x]-1),{x,0,Infinity}]
Pi^4/15
b=Limit[
Integrate[-3 x^2 Log[1-Exp[-x]],{x,0,x1}],
x1->Infinity]
Pi^4/15
Bob Hanlon
===
> Subject: Bug in Integrate in Version 5.1?
> Integrate gives the following answer for this integral:
> a = Integrate[x^3 /(Exp[x] - 1), {x, 0, Infinity}]
> N[a]
> Out[1]= Pi^4/15
> Out[2]= 6.49394
> which I think is correct.
> This integral, which should be the same ( by partial integration),
> gives:
> b = Integrate[-3 x^2 Log[1 - Exp[-x]], {x, 0, Infinity}]
> N[b]
> Out[3]= (11*Pi^4)/60
> Out[4]= 17.8583
> while numerical integration gives:
> NIntegrate[-3x^2 Log[1 - Exp[-x]], {x, 0, Infinity}]
> Out[5]= 6.49394
> This is done with version 5.1.
> Version 4.2 gives
> c=Integrate[-3*x^2*Log[1 - Exp[-x]], {x, 0, Infinity}]
> N[c]
> Out[1]= Pi^4/15
> Out[2]= 6.49394
> (Remarkably version 4.2. complaints: Series::esss: Essential
> singularity
> encountered in ... while calculating the correct result. )
> So the result in version 5.1. looks wrong.
> Or did I make a mistake?
> Alexander
===
Subject: Re: managing order of magnitude instead of numbers
> i need to make calculations without specifying the exact values of my
> parameters, i want only to specify their order of magnitude.
> obviously i need only orders of magnitude as results.
> now the problem is that
> O(1) - O(1) = O(1)
> while
> 1-1 = 0
> so it's clear i cannot use numbers to make this order of magnitude
> calculation
One comment here: Mathematica does know something about objects of this
type. For example, enter
O[x] - O[x]
and see what happens.
Generally, I think you can use Series to do such calculations most
efficiently.
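For instance, Series objects carry an explicit O[x] error term through arithmetic, so the cancellation problem described above does not arise (a small sketch):

```mathematica
s1 = 1 + x + O[x]^3;
s2 = 1 - x + O[x]^3;
s1 - s2      (* 2 x + O[x]^3 -- the uncertainty is kept, not cancelled *)
s1*s2        (* 1 - x^2 + O[x]^3 *)
O[x] - O[x]  (* O[x], not 0 *)
```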
> at the present stage i let mathematica do the calculation in a fully
> symbolic way and then, by hand calculation, i get my result by
> substitution of the orders of magnitude in place of the symbols used.
> making a very simple example:
> i ask mathematica to do
> a - b
> and then i substitute
> a=O(1)
> b=O(0.002)
> and calculate
> a - b = O(1)
> in this way i'm making operations between magnitudes not values, but i
> have to do it on my own, while doing the calculation with mathematica
> would be much better.
> is there any way to make calculations between orders of magnitude
> instead of between numbers?
Perhaps you could use Interval arithmetic? For example, defining
o[x_] := Interval[x {0.9,1.1}]
to be an order of magnitude interval, then
o[1] - o[0.002]
returns an interval. We can use the following to check the magnitude:
oCheck[int_,n_] := IntervalIntersection[int, o[n]] =!= Interval[]
For example,
oCheck[o[1] - o[0.002], 1]
yields True, whereas
oCheck[o[1] - o[0.002],0.1]
and
oCheck[o[1] - o[0.002], 10]
do not.
Paul
--
Paul Abbott Phone: +61 8 6488 2734
School of Physics, M013 Fax: +61 8 6488 1014
The University of Western Australia (CRICOS Provider No 00126G)
AUSTRALIA http://physics.uwa.edu.au/~paul
http://InternationalMathematicaSymposium.org/IMS2005/
===
Subject: Re: Re: nesting while function
In[1]:=
t = 0; i = 1;
While[i <= 3,
j=1;
While[j <= 2,
t = t + 1;
j = j + 1];
i = i + 1]
t
Out[3]=
6
> > Hi
> > t = 0; i = 1; j = 1;
> > While[i <= 3,
> j=1;
> > While[j <= 2,
> > t = t + 1;
> > j = j + 1];
> > i = i + 1]
> > give the result for t=2 and not 6 as should be
> > but
> > t = 0; Do[
> > Do[
> > t = t + 1;
> > , {j, 2}];
> > , {i, 3}]
> > give the value of 6 for t
> > is this a wrong behaviour for the While function? or i miss something.
===
Subject: Re: nesting while function
> Hi
> t = 0; i = 1; j = 1;
> While[i <= 3,
j=1;
> While[j <= 2,
> t = t + 1;
> j = j + 1];
> i = i + 1]
> give the result for t=2 and not 6 as should be
> but
> t = 0; Do[
> Do[
> t = t + 1;
> , {j, 2}];
> , {i, 3}]
> give the value of 6 for t
> is this a wrong behaviour for the While function? or i miss something.
--
Peter Pein
Berlin
===
Subject: Re: nesting while function
You forgot to initialize j in the outer loop:
t = 0; i = 1;
While[i <= 3,
j = 1;
While[j <= 2, t++; j++];
i++]
t
6
Bobby
> Hi
> t = 0; i = 1; j = 1;
> While[i <= 3,
> While[j <= 2,
> t = t + 1;
> j = j + 1];
> i = i + 1]
> give the result for t=2 and not 6 as should be
> but
> t = 0; Do[
> Do[
> t = t + 1;
> , {j, 2}];
> , {i, 3}]
> give the value of 6 for t
> is this a wrong behaviour for the While function? or i miss something.
--
DrBob@bigfoot.com
===
Subject: Re: nesting while function
Marloo,
j=1 is in the wrong place. I would try:
In[33]:=
t = 0; i = 1;
While[i <= 3, j = 1;
While[j <= 2, t++; j++; ];
i++; ]
In[35]:=
t
Out[35]=
6
> Hi
> t = 0; i = 1; j = 1;
> While[i <= 3,
> While[j <= 2,
> t = t + 1;
> j = j + 1];
> i = i + 1]
> give the result for t=2 and not 6 as should be
> but
> t = 0; Do[
> Do[
> t = t + 1;
> , {j, 2}];
> , {i, 3}]
> give the value of 6 for t
> is this a wrong behaviour for the While function ? or i miss
> something.
===
Subject: Re: nesting while function
> Hi
> t = 0; i = 1; j = 1;
> While[i <= 3,
> While[j <= 2,
> t = t + 1;
> j = j + 1];
> i = i + 1]
> give the result for t=2 and not 6 as should be
> but
> t = 0; Do[
> Do[
> t = t + 1;
> , {j, 2}];
> , {i, 3}]
> give the value of 6 for t
> is this a wrong behaviour for the While function ? or i miss
> something.
You need to reset j to 1 before starting your inner While loop.
Ssezi
===
Subject: Re: nesting while function
You need to reset j within the outer loop
t=0;
i=1;
While[i<=3,
j=1;
While[j<=2,
t=t+1;
j=j+1];
i=i+1]; t
6
Bob Hanlon
===
> Subject: nesting while function
> Hi
> t = 0; i = 1; j = 1;
> While[i <= 3,
> While[j <= 2,
> t = t + 1;
> j = j + 1];
> i = i + 1]
> give the result for t=2 and not 6 as should be
> but
> t = 0; Do[
> Do[
> t = t + 1;
> , {j, 2}];
> , {i, 3}]
> give the value of 6 for t
> is this a wrong behaviour for the While function? or i miss something.
===
Subject: System for Mathematica
I want to announce that I have put a new version of my oo system
(version 2) on the web at:
www.schmitther.de
The new oo system contains the following changes:
- the programs @, self, and super were changed
- class arrays in objects were implemented
- the use of blocks in the oo system was described
Hermann Schmitt
===
Subject: Re: New Article : ShadowPlot3D
If you could give a simple example that doesn't work for you, you would
probably get a response.
Convert the code to InputForm and copy it to the posting.
David Park
djmp@earthlink.net
http://home.earthlink.net/~djmp/
does anyone have trouble with the Graphics3D package on version 5.1, and
specifically with the ShadowPlot3D function? Is there a solution for it
to work properly? Marc
===
Subject: Mathematica and RC circuits
I am trying to use Mathematica 5.1.1 to solve an RC circuit, and I am
having problems. Whenever I use I or i for the current, the program
keeps running until it closes down. The other problem: how do I set
up Mathematica to solve the equation i = v/r + c dv/dt for dv/dt?
Then I want to plot this equation based on different current inputs.
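One possible setup, sketched under assumed component values (note that Mathematica reserves the symbol I for the imaginary unit, so the current should be named something else, e.g. cur):

```mathematica
(* hypothetical values: r in ohms, c in farads, cur[t] a constant 1 mA input *)
r = 1000; c = 10^-6;
cur[t_] := 0.001;
(* NDSolve handles the implicit form i == v/r + c dv/dt directly *)
sol = NDSolve[{cur[t] == v[t]/r + c v'[t], v[0] == 0}, v, {t, 0, 0.01}];
Plot[Evaluate[v[t] /. sol], {t, 0, 0.01}]
```

Different current inputs can then be tried by redefining cur[t].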
===
Subject: Re: Plot axis numbers
Thanks for all the suggestions on formatting the axis numbers. I decided
to use the CustomTicks package, which worked great.
Now the plots in my thesis look much better :-)
Torquil
> Hi Torquil,
>>How can I ensure that all numbers on the plot axes have the
>>same number of digits?
>>If e.g. the y-axis in a plot goes from 0 to 1 in steps of
>>0.25, then the
>> axis numbers will be e.g 0.25 and 0.5. I want this to be
>>0.25 and 0.50. Any suggestions other than manually adjusting
>>the numbers with the FrameTicks option?
> I recommend Mark Caprio's CustomTicks package which can be found on
> MathSource.
> Use it as you would FrameTicks (or AxisTicks).
> Dave.
===
Subject: Re: Plot axis numbers
Hi Torquil,
> How can I ensure that all numbers on the plot axes have the
> same number of digits?
> If e.g. the y-axis in a plot goes from 0 to 1 in steps of
> 0.25, then the
> axis numbers will be e.g 0.25 and 0.5. I want this to be
> 0.25 and 0.50. Any suggestions other than manually adjusting
> the numbers with the FrameTicks option?
I recommend Mark Caprio's CustomTicks package which can be found on
MathSource.
Use it as you would FrameTicks (or AxisTicks).
Dave.
===
Subject: keyboard shorcut smorgasbord
mathgroup,
i've been mucking around in KeyEventTranslations.tr setting up keyboard
shortcuts recently and i've learned that front end tokens are pretty
cool. the wolfram page
http://documents.wolfram.com/v5/FrontEnd/FrontEndTokens/
gives pretty good docs.
anyway, here are three that i had to work a little to get going (with
good help from the mathgroup, as always).
* for a quit kernel keyboard shortcut, add the following to
KeyEventTranslations.tr:
Item[KeyEvent["q", Modifiers -> {Control, Option}],
FrontEndExecute[
FrontEndToken[
SelectedNotebook[ ],
"EvaluatorQuit",
Automatic
]
]
]
* for an initialization cell toggle keyboard shortcut (repeated from
previous thread), add the following to KeyEventTranslations.tr:
Item[KeyEvent["i", Modifiers -> {Command, Control}],
FrontEndExecute[
FrontEndToken[
SelectedNotebook[ ],
"InitializationCell",
Toggle
]
]
]
* for a save as package keyboard shortcut, add the following to
KeyEventTranslations.tr:
Item[KeyEvent["k", Modifiers -> {Control, Option}],
SaveRenameSpecial["Package"]]
trevor.
===
Subject: Re: how call a function by same name in 2 different contexts?
>>Is there a way to unload a specific package after loading it
>>without having to restart Mathematica to clean things out?
>Perhaps http://library.wolfram.com/infocenter/MathSource/602/
I think a better choice would be the CleanSlate package found at
http://library.wolfram.com/infocenter/MathSource/4718/
--
To reply via email subtract one hundred and four
===
Subject: How to quickly find number of non-zero elements in sparse matrix
rows?
I have a sparse matrix, roughly 200k by 200k, with about .01% of the
entries non zero, represented with SparseArray. I'd like to reasonably
efficiently generate a 200k-long list where each element is the number of
non-zero entries in the corresponding row of the matrix. I haven't been
able to figure out a quick way to do this.
One approach I've tried is the following (using a made up SparseArray of
an identity matrix to illustrate the point):
In[76]:= sa = SparseArray[Table[{i, i} -> 1, {i, 200000}]];
In[77]:= rowLen[sa_, r_] := Length[ArrayRules[Take[sa, {r}]]] - 1
However, it's quite slow--about 1/10 of a second for each value computed
(on a 1GHz Mac G4):
In[80]:= Table[rowLen[sa,i], {i,100}] // Timing
Out[80]=
{12.4165
Second,{1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1}}
I've got to assume that there's an efficient way to do this--does anyone
have any suggestions?
-matt
--
Matt Pharr matt@pharr.org
In a cruel and evil world, being cynical can allow you to get some
entertainment out of it. --Daniel Waters
===
Subject: Re: Re: How to quickly find number of non-zero elements in sparse
matrix rows?
Paul,
Your function is really quick. I wish I understood the way the Split
command works. Unfortunately, your code does not work, in its present
form, if the array has rows with no nonzero entries. Here is a test case
that is 2020x2020:
sa = SparseArray[Table[{i + 20, i + 20} -> 1, {i, 200*10}]];
The first 20 rows will be blank.
With that as input, your code produces a list of length 2000, with all
1s as output.
My code does produce the correct answer. Unfortunately, with my
algorithm, the number of nonzero entries (d) of the sa matrix follows a
power law in the time (t) required to count the nonzero items in each
row:
d == a*t^b
On my computer, {a -> 2924.8, b -> 0.434112}. At that rate, the ~4
million nonzero entries in Pharr's matrix would take around 6.5 months
to classify using my procedure... lol.
> > I have a sparse matrix, roughly 200k by 200k, with about .01% of the
> > entries non-zero, represented with SparseArray. I'd like to reasonably
> > efficiently generate a 200k-long list where each element is the number
of
> > non-zero entries in the corresponding row of the matrix. I haven't
been
> > able to figure out a quick way to do this.
> > One approach I've tried is the following (using a made up SparseArray
of
> > an identity matrix to illustrate the point):
> > In[76]:= sa = SparseArray[Table[{i, i} -> 1, {i, 200000}]];
> Alternatively,
> sa = SparseArray[{{i_, i_} -> 1}, {200000, 200000}]
> > In[77]:= rowLen[sa_, r_] := Length[ArrayRules[Take[sa, {r}]]]-1
> > However, it's quite slow--about 1/10 of a second for each value
computed
> > (on a 1GHz Mac G4)
> > In[80]:= Table[rowLen[sa,i], {i,100}] // Timing
> > Out[80]=
> > {12.4165
Second,{1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
> >
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
> > 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1}}
> > I've got to assume that there's an efficient way to do this--does
anyone
> > have any suggestions?
> Here is one suggestion: For the first 100 entries, the following is
> essentially instantaneous (only one call to ArrayRules).
> Most[Length /@ Split[ArrayRules[sa[[Range[100]]]],
> #1[[1,1]] == #2[[1,1]] & ]] // Timing
> The test #1[[1,1]] == #2[[1,1]] & is used by Split to start a new List
> whenever the row index changes. Most is required to drop the default
> case, {_, _} -> 0.
> For the full array, the following is quite reasonable:
> Most[Length /@ Split[ArrayRules[sa], #1[[1,1]]==#2[[1,1]]&]]; //Timing
> In the advanced documentation for Linear Algebra -- there is a link to
> this from the Help Browser entry for SparseArray -- you will find that
> SparseArray uses the Compressed Sparse Row (CSR) format as an internal
> storage format. You can view this information using InputForm. For
> example,
> SparseArray[{{1, 1} -> 1, {2, 3} -> 2, {2, 2} -> 1}, {3, 4}] //
> InputForm
> The fourth part of the data structure records the cumulative sum of
> non-zero entries. However, there seems to be no simple way to access
> this information. Usually, for special formats you can use Part to
> extract such information but Part is interpreted by SparseArray,
> circumventing this.
> Paul
> --
> Paul Abbott Phone: +61 8 6488 2734
> School of Physics, M013 Fax: +61 8 6488 1014
> The University of Western Australia (CRICOS Provider No 00126G)
> AUSTRALIA http://physics.uwa.edu.au/~paul
> http://InternationalMathematicaSymposium.org/IMS2005/
--
Chris Chiasson
http://chrischiasson.com/
1 (810) 265-3161
===
Subject: Re: How to quickly find number of non-zero elements in sparse
matrix rows?
> I have a sparse matrix, roughly 200k by 200k, with about .01% of the
> entries non-zero, represented with SparseArray. I'd like to reasonably
> efficiently generate a 200k-long list where each element is the number of
> non-zero entries in the corresponding row of the matrix. I haven't been
> able to figure out a quick way to do this.
> One approach I've tried is the following (using a made up SparseArray of
> an identity matrix to illustrate the point):
> In[76]:= sa = SparseArray[Table[{i, i} -> 1, {i, 200000}]];
Alternatively,
sa = SparseArray[{{i_, i_} -> 1}, {200000, 200000}]
> In[77]:= rowLen[sa_, r_] := Length[ArrayRules[Take[sa, {r}]]]-1
> However, it's quite slow--about 1/10 of a second for each value computed
> (on a 1GHz Mac G4)
> In[80]:= Table[rowLen[sa,i], {i,100}] // Timing
> Out[80]=
> {12.4165
Second,{1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
>
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
> 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1}}
> I've got to assume that there's an efficient way to do this--does anyone
> have any suggestions?
Here is one suggestion: For the first 100 entries, the following is
essentially instantaneous (only one call to ArrayRules).
Most[Length /@ Split[ArrayRules[sa[[Range[100]]]],
#1[[1,1]] == #2[[1,1]] & ]] // Timing
The test #1[[1,1]] == #2[[1,1]] & is used by Split to start a new List
whenever the row index changes. Most is required to drop the default
case, {_, _} -> 0.
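On a tiny array (a hypothetical 2x3 example, chosen here purely for illustration), the grouping is easy to see; note that a row with no nonzero entries contributes nothing to the result, which is the caveat raised elsewhere in this thread:

```mathematica
(* ArrayRules lists {row, col} -> value in row-major order and ends
   with the default rule {_, _} -> 0; Split groups consecutive rules
   with equal row index, and Most drops the default rule's group *)
rules = ArrayRules[SparseArray[{{1, 1} -> 1, {1, 3} -> 2, {2, 2} -> 5}]];
Most[Length /@ Split[rules, #1[[1, 1]] == #2[[1, 1]] &]]
(* {2, 1}: two nonzero entries in row 1, one in row 2 *)
```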
For the full array, the following is quite reasonable:
Most[Length /@ Split[ArrayRules[sa], #1[[1,1]]==#2[[1,1]]&]]; //Timing
In the advanced documentation for Linear Algebra -- there is a link to
this from the Help Browser entry for SparseArray -- you will find that
SparseArray uses the Compressed Sparse Row (CSR) format as an internal
storage format. You can view this information using InputForm. For
example,
SparseArray[{{1, 1} -> 1, {2, 3} -> 2, {2, 2} -> 1}, {3, 4}] //
InputForm
The fourth part of the data structure records the cumulative sum of
non-zero entries. However, there seems to be no simple way to access
this information. Usually, for special formats you can use Part to
extract such information but Part is interpreted by SparseArray,
circumventing this.
Paul
--
Paul Abbott Phone: +61 8 6488 2734
School of Physics, M013 Fax: +61 8 6488 1014
The University of Western Australia (CRICOS Provider No 00126G)
AUSTRALIA http://physics.uwa.edu.au/~paul
http://InternationalMathematicaSymposium.org/IMS2005/
===
Subject: Re: letrec/named let
Indeed, for real data, your substitute for ordering is much faster (in
5.1.1) than the built-in. If it's 2.5 times SLOWER on your machine, that does
seem to imply WRI has radically DECREASED Ordering's performance from 4.1 to
5.1.1.
It's hard to believe they'd do that, but apparently they have.
> The only possibility I can think of is that
> ord=Ordering[data] returns a packed array of integers when data is
integer,
> while ord=Ordering[data] returns an unpacked array of integers when data
is
> real.
Exactly right. See below. Maybe this explains the performance issue for
Ordering itself, too.
(Note: 'data' is packed in both tests.)
pq = Developer`PackedArrayQ;
carlTimed[s_] := Module[{ord, t, o, ans},
Print@Timing[ord = Ordering@s; Ordering];
Print@Timing[t = FoldList[
Plus, 1, Sign[Abs[ListCorrelate[{1, -1}, s[[ord]]]]]]; FoldList];
Print@Timing[o = Ordering@ord; Ordering];
Print@Timing[ans = t[[o]]; Part];
Print[pq /@ {ord, t, o, ans}];
ans]
ordering[x_List] := Round@Sort[Transpose[{x, N@Range@Length@x}]][[All, 2]]
carlNewOrder[s_] := Module[{ord, t, o, ans},
Print@Timing[ord = ordering@s; ordering];
Print@Timing[
t = FoldList[Plus, 1, Sign[Abs[ListCorrelate[{1, -1}, s[[ord]]]]]];
FoldList];
Print@Timing[o = ordering@ord; ordering];
Print@Timing[ans = t[[o]]; Part];
Print[pq /@ {ord, t, o, ans}];
ans]
data = Table[Random[], {10^6}];
Timing[carlTimed@data; Total]
Timing[carlNewOrder@data; Total]
{8. Second,Ordering}
{0.391 Second,FoldList}
{7.25 Second,Ordering}
{0.063 Second,Part}
{False,True,True,True}
{15.891 Second,Total}
{1.063 Second,ordering}
{0.343 Second,FoldList}
{7.36 Second,ordering}
{0.062 Second,Part}
{True,True,True,True}
{8.828 Second,Total}
Notice ordering returned a packed array (ord) for real data, but Ordering
didn't. Also notice that in the second use (with integer data) Ordering and
ordering are equally fast, even though ordering was applied to a packed
integer array while Ordering was applied to an unpacked one.
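A possible workaround for the unpacked result (a sketch, using the Developer` packed-array utilities available in these versions) is to repack it before indexing:

```mathematica
(* Repack Ordering's result so that the later t[[o]] Part extraction
   runs at packed-array speed *)
data = Table[Random[], {10^5}];
ord = Ordering[data];
If[! Developer`PackedArrayQ[ord],
 ord = Developer`ToPackedArray[ord]];
Developer`PackedArrayQ[ord]
```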
For integer (Poisson) data, Ordering is much faster than ordering BOTH times
it is used. (It's applied to, and returns, packed arrays each time in both
routines.)
Needs["Statistics`DiscreteDistributions`"]; (* for PoissonDistribution and RandomArray *)
data = 1 + RandomArray[PoissonDistribution[2], {10^6}];
Timing[carlTimed@data; Total]
Timing[carlNewOrder@data; Total]
{0.781 Second,Ordering}
{0.266 Second,FoldList}
{0.64 Second,Ordering}
{0.016 Second,Part}
{True,True,True,True}
{1.703 Second,Total}
{6.969 Second,ordering}
{0.234 Second,FoldList}
{5.703 Second,ordering}
{0.031 Second,Part}
{True,True,True,True}
{12.937 Second,Total}
But you intended 'ordering' only for real data anyway.
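As a quick sanity check that the substitute agrees with the built-in (it relies on Sort's lexicographic ordering of {value, index} pairs, so it assumes distinct machine-real entries):

```mathematica
ordering[x_List] := Round@Sort[Transpose[{x, N@Range@Length@x}]][[All, 2]]
data = Table[Random[], {1000}];
ordering[data] == Ordering[data]  (* expect True for distinct entries *)
```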
Bobby
> ----- Original Message -----
===
> Subject: Re: letrec/named let
>>> Union[data] now takes 0.626 seconds, while Ordering[data]
>>> takes 8.031 seconds. On my machine Union[data] takes 1.2
>>> seconds and Ordering[data] takes 1.1 seconds.
>> That can have many interpretations, can't it?
>> For instance...
>> eqns = Thread[{1.2, 1.1}/{0.626, 8.031} ==
>> (myMachine/yourMachine)* {unionSpeedup, orderingSpeedup}];
>> {1/orderingSpeedup, myMachine/yourMachine} /.
>> ToRules@Reduce@Flatten@{eqns, unionSpeedup == 1}
>> {13.9954, 1.91693}
>> That says my machine is 1.9 times as fast as yours (not unlikely), Union
>> is equally fast in both versions (not unlikely), and Ordering is 14
times
>> faster in 4.1 than 5.1.1 (possible, but unlikely).
>> or...
>> {unionSpeedup, yourMachine/
>> myMachine} /. ToRules@Reduce@Flatten@{eqns, orderingSpeedup == 1}
>> {13.9954,7.30091}
>> In that case, your machine is 7.3 times faster than mine (unlikely, but
>> possible), Union is 14 times faster in 5.1.1 than 4.1 (quite possible),
>> and Ordering is equally fast in both versions (also possible).
>> I suspect BOTH routines are faster in the new version, but the
improvement
>> in Union is 14 times larger than the improvement in Ordering:
>> unionSpeedup/orderingSpeedup /. ToRules@Reduce@eqns
>> 13.9954
>> Bobby
> Let's first compare Union and Sort. I assume that with data being a list
of
> a million reals, that Sort[data] should take about the same time or a
little
> less than Union[data], since Union also sorts. I find it unlikely that
Sort,
> whose algorithmic complexity was figured out ages ago, could experience
an
> order of magnitude increase in speed in the latest upgrade. Moreover, it
> seems like the kind of thing that Wolfram would advertise if such a speed
> increase occurred.
> At any rate, if Sort and Union take the same amount of time, that means
Sort
> is about 13 times quicker than Ordering on your machine. Now, the
algorithm
> for Sort should be essentially the same as the algorithm for Ordering, so
> I'm surprised that Ordering is so slow. In fact, it might be possible to
> come up with a function which is faster than the built-in Ordering. For
> example,
> ordering[x:{__Real}] := Round @ Sort[ Transpose[{x,
> N@Range@Length@x}]][[All,2]]
> is only 2 1/2 times slower than Ordering for Reals on my machine, and is
> probably much faster than the built-in Ordering on your machine.
> I'm also puzzled by some of your timings. The 3rd line of my function,
which
> you call Part, takes 10 times as long when the data is real. But,
> t[[Ordering[ord]]] applies Ordering to ord, and ord=Ordering[data] is
always
> a permutation of the first Length[data] integers. Moreover, t is always a
> list of Length[data] integers. Why does t[[Ordering[ord]]] take so long
when
> ord is the Ordering of real data, but is so quick when ord is the
Ordering
> of integer data? The only possibility I can think of is that
> ord=Ordering[data] returns a packed array of integers when data is
integer,
> while ord=Ordering[data] returns an unpacked array of integers when data
is
> real.
> Carl Woll
>> On Fri, 6 May 2005 19:32:32 -0400, Carl K. Woll
>>> ----- Original Message -----
===
>>> Subject: Re: letrec/named let
>>>> I was assuming nothing in your code would be slower in the newest
>>>> version,
>>>> so I was hoping to identify a component that had gotten faster from
>>>> version
>>>> to version in MY code. Perhaps Union is 12 times faster in 5.1.1, for
>>>> instance.
>>>>
>>>> Either way, as I said before, we need timings on both machines to
narrow
>>>> it down.
>>>>
>>>> Nevertheless, here are timings for statements in carl:
>>>>
>>>> carlTimed[s_]:=Module[{ord,t,ans},
>>>> Print@Timing[ord=Ordering@s;Ordering];
>>>>
>>>>
Print@Timing[t=FoldList[Plus,1,Sign[Abs[ListCorrelate[{1,-1},s[[ord]]]]]];
>>>> FoldList];
>>>> Print@Timing[ans=t[[Ordering[ord]]];Part]]
>>>>
>>>> data=1+RandomArray[PoissonDistribution[2],{10^6}];
>>>> Timing[three=carlTimed@data;]
>>>>
>>>> {0.813 Second,Ordering}
>>>> {0.281 Second,FoldList}
>>>> {0.641 Second,Part}
>>>> {1.735 Second,Null}
>>>>
>>>> data=Table[Random[],{10^6}];
>>>> Timing[three=carlTimed@data;]
>>>>
>>>> {8.031 Second,Ordering}
>>>> {0.391 Second,FoldList}
>>>> {7.281 Second,Part}
>>>> {15.906 Second,Null}
>>>>
>>>> Bobby
>>>>
>>> And so we discover that when data is a list of a million reals,
>>> Union[data]
>>> now takes 0.626 seconds, while Ordering[data] takes 8.031 seconds. On
my
>>> machine Union[data] takes 1.2 seconds and Ordering[data] takes 1.1
>>> seconds.
>>> I think we've found the bottleneck. Ordering is no longer a very fast
>>> function, at least when its argument is a list of reals.
>>> Carl Woll
>>>> On Fri, 6 May 2005 18:56:03 -0400, Carl K. Woll
>>>>
>>>>> ----- Original Message -----
===
>>>>> Subject: Re: letrec/named let
>>>>>
>>>>>
>>>>>> I'm guessing Union, ReplaceAll, Dispatch, and/or Thread are better
>>>>>> optimized in version 5.1.1. We'd have to get relative times for
>>>>>> various
>>>>>> statements on BOTH our machines, to narrow it down. Here they are
for
>>>>>> my
>>>>>> machine:
>>>>>>
>>>>>> treat2[s_List] := Module[{u, t, d, ans},
>>>>>> Print@Timing[u = Union@s; Union];
>>>>>> Print@Timing[t = Thread[u -> Range@Length@u]; Thread];
>>>>>> Print@Timing[d = Dispatch@t; Dispatch];
>>>>>> Print@Timing[ans = s /. d; ReplaceAll];
>>>>>> ans]
>>>>>>
>>>>>> data=1+RandomArray[PoissonDistribution[2],{10^6}];
>>>>>> Timing[treat2@data;]
>>>>>>
>>>>>> {0.265 Second,Union}
>>>>>> {0. Second,Thread}
>>>>>> {0. Second,Dispatch}
>>>>>> {0.297 Second,ReplaceAll}
>>>>>> {0.562 Second,Null}
>>>>>>
>>>>>> data=Table[Random[],{10^6}];
>>>>>> Timing[treat2@data;]
>>>>>>
>>>>>> {0.625 Second,Union}
>>>>>> {2.922 Second,Thread}
>>>>>> {1.625 Second,Dispatch}
>>>>>> {3.89 Second,ReplaceAll}
>>>>>> {10.328 Second,Null}
>>>>>>
>>>>>> If your machine has Dispatch (for instance) taking a larger fraction
>>>>>> of
>>>>>> the total time, we'll have a clue to what changed.
>>>>>>
>>>>>> I'm actually shocked by how fast this is, considering the relatively
>>>>>> naive
>>>>>> (IMO) method involved.
>>>>>>
>>>>>> Bobby
>>>>>>
>>>>>
>>>>> I'm not surprised by the timings you give for treat. I'm surprised by
>>>>> your
>>>>> timing for my function, carl. On my machine, applying my function to
a
>>>>> million reals takes about 2 times as long as a Union. On your
machine,
>>>>> it
>>>>> apparently takes about 24 times as long as a Union. The 1st and 3rd
>>>>> lines
>>>>> of
>>>>> my function are basically sorting operations, so the bottleneck on
your
>>>>> machine must be the 2nd line. However, on my machine, the 2nd line is
>>>>> the
>>>>> fastest operation. The only candidates for a bottleneck seem to be
>>>>> FoldList
>>>>> and ListCorrelate, or perhaps something funny is going on with packed
>>>>> arrays. I'd be curious what timings my function would get if you
>>>>> treated
>>>>> it like treat2 .
>>>>>
>>>>> Carl Woll
>>>>>
>>>>>> On Fri, 6 May 2005 14:12:55 -0400, Carl K. Woll
>>>>>>
>>>>>>> ----- Original Message -----
===
>>>>>>> Subject: Re: letrec/named let
>>>>>>>
>>>>>>>
>>>>>>>> That's a very nice rescue of the Ordering@Ordering solution.
>>>>>>>>
>>>>>>>> Here's a test with (probably) no repeated elements, however, that
>>>>>>>> shows
>>>>>>>> treat ahead. Perhaps there's an intermediate situation where carl
>>>>>>>> wins,
>>>>>>>> or
>>>>>>>> it matters whether the data is integer or real?
>>>>>>>>
>>>>>>>> data=Table[Random[],{10^6}];
>>>>>>>> Timing[one=treat@data;]
>>>>>>>> Timing[three=carl@data;]
>>>>>>>> one==three
>>>>>>>>
>>>>>>>> {10.109 Second,Null}
>>>>>>>> {15.938 Second,Null}
>>>>>>>> True
>>>>>>>>
>>>>>>>
>>>>>>> Interesting. On my machine, carl is over 5 times faster than treat
>>>>>>> for
>>>>>>> the
>>>>>>> same test. I have version 4.1 on a Windows XP operating system.
What
>>>>>>> is
>>>>>>> your
>>>>>>> setup? Perhaps you could test each individual line of carl to see
>>>>>>> where
>>>>>>> the
>>>>>>> bottleneck is.
>>>>>>>
>>>>>>> Carl Woll
>>>>>>>
>>>>>>>> Bobby
>>>>>>>>
>>>>>>>> On Fri, 6 May 2005 13:26:03 -0400, Carl K. Woll
>>>>>>>>
>>>>>>>>
>>>>>>>>>> Our solutions agree on lists of strictly positive integers, but
>>>>>>>>>> timings
>>>>>>>>>> depend a great deal on the minimum value:
>>>>>>>>>>
>>>>>>>>>> Clear[treat, andrzej]
>>>>>>>>>> treat[s_List] := Module[{u = Union@s}, s /. Dispatch@Thread[u ->
>>>>>>>>>> Range@
>>>>>>>>>> Length@u]]
>>>>>>>>>> andrzej[s_List] :=
>>>>>>>>>> First@NestWhile[Apply[If[FreeQ[##], {#1 /. x_ /;
>>>>>>>>>> x > #2 :> x - 1, #2}, {#1, #2 + 1}] &, #] &, {s, 1},
>>>>>>>>>> Last[#] < Max[First[#]] &]
>>>>>>>>>>
>>>>>>>>>> data=1+RandomArray[PoissonDistribution[2],{10^6}];
>>>>>>>>>> Timing[one=treat@data;]
>>>>>>>>>> Timing[two=andrzej@data;]
>>>>>>>>>> one == two
>>>>>>>>>>
>>>>>>>>>> {0.593 Second,Null}
>>>>>>>>>> {0.657 Second,Null}
>>>>>>>>>> True
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Another possibility modeled after DrBob's first answer is the
>>>>>>>>> following:
>>>>>>>>>
>>>>>>>>> carl[s_] := Module[{ord,t},
>>>>>>>>> ord = Ordering[s];
>>>>>>>>> t = FoldList[Plus, 1, Sign[Abs[ListCorrelate[{1, -1},
>>>>>>>>> s[[ord]]]]]];
>>>>>>>>> t[[Ordering[ord]]]
>>>>>>>>> ]
>>>>>>>>>
>>>>>>>>> If there are a lot of repeated elements in the data, then treat
>>>>>>>>> seems
>>>>>>>>> to
>>>>>>>>> be
>>>>>>>>> faster. On the other hand, if there aren't a lot of repeated
>>>>>>>>> elements,
>>>>>>>>> then
>>>>>>>>> carl seems to be faster. It seems like it ought to be possible to
>>>>>>>>> compute
>>>>>>>>> Ordering[Ordering[data]] more quickly since
>>>>>>>>> Ordering[Ordering[Ordering[data]]] equals Ordering[data], but I
>>>>>>>>> couldn't
>>>>>>>>> think of a way.
>>>>>>>>>
>>>>>>>>> Carl Woll
>>>>>>>> --
>>>>>>>> DrBob@bigfoot.com
>>>>>> --
>>>>>> DrBob@bigfoot.com
>>>> --
>>>> DrBob@bigfoot.com
>> --
>> DrBob@bigfoot.com
--
DrBob@bigfoot.com
===
Subject: Re: letrec/named let
----- Original Message -----
===
Subject: Re: letrec/named let
>I was assuming nothing in your code would be slower in the newest version,
>so I was hoping to identify a component that had gotten faster from version
>to version in MY code. Perhaps Union is 12 times faster in 5.1.1, for
>instance.
> Either way, as I said before, we need timings on both machines to narrow
> it down.
> Nevertheless, here are timings for statements in carl:
> carlTimed[s_]:=Module[{ord,t,ans},
> Print@Timing[ord=Ordering@s;Ordering];
>
Print@Timing[t=FoldList[Plus,1,Sign[Abs[ListCorrelate[{1,-1},s[[ord]]]]]];
> FoldList];
> Print@Timing[ans=t[[Ordering[ord]]];Part]]
> data=1+RandomArray[PoissonDistribution[2],{10^6}];
> Timing[three=carlTimed@data;]
> {0.813 Second,Ordering}
> {0.281 Second,FoldList}
> {0.641 Second,Part}
> {1.735 Second,Null}
> data=Table[Random[],{10^6}];
> Timing[three=carlTimed@data;]
> {8.031 Second,Ordering}
> {0.391 Second,FoldList}
> {7.281 Second,Part}
> {15.906 Second,Null}
> Bobby
And so we discover that when data is a list of a million reals, Union[data]
now takes 0.626 seconds, while Ordering[data] takes 8.031 seconds. On my
machine Union[data] takes 1.2 seconds and Ordering[data] takes 1.1 seconds.
I think we've found the bottleneck. Ordering is no longer a very fast
function, at least when its argument is a list of reals.
Carl Woll
> On Fri, 6 May 2005 18:56:03 -0400, Carl K. Woll
>> ----- Original Message -----
===
>> Subject: Re: letrec/named let
>>> I'm guessing Union, ReplaceAll, Dispatch, and/or Thread are better
>>> optimized in version 5.1.1. We'd have to get relative times for various
>>> statements on BOTH our machines, to narrow it down. Here they are for
my
>>> machine:
>>> treat2[s_List] := Module[{u, t, d, ans},
>>> Print@Timing[u = Union@s; Union];
>>> Print@Timing[t = Thread[u -> Range@Length@u]; Thread];
>>> Print@Timing[d = Dispatch@t; Dispatch];
>>> Print@Timing[ans = s /. d; ReplaceAll];
>>> ans]
>>> data=1+RandomArray[PoissonDistribution[2],{10^6}];
>>> Timing[treat2@data;]
>>> {0.265 Second,Union}
>>> {0. Second,Thread}
>>> {0. Second,Dispatch}
>>> {0.297 Second,ReplaceAll}
>>> {0.562 Second,Null}
>>> data=Table[Random[],{10^6}];
>>> Timing[treat2@data;]
>>> {0.625 Second,Union}
>>> {2.922 Second,Thread}
>>> {1.625 Second,Dispatch}
>>> {3.89 Second,ReplaceAll}
>>> {10.328 Second,Null}
>>> If your machine has Dispatch (for instance) taking a larger fraction of
>>> the total time, we'll have a clue to what changed.
>>> I'm actually shocked by how fast this is, considering the relatively
>>> naive
>>> (IMO) method involved.
>>> Bobby
>> I'm not surprised by the timings you give for treat. I'm surprised by
>> your
>> timing for my function, carl. On my machine, applying my function to a
>> million reals takes about 2 times as long as a Union. On your machine,
it
>> apparently takes about 24 times as long as a Union. The 1st and 3rd lines
>> of
>> my function are basically sorting operations, so the bottleneck on your
>> machine must be the 2nd line. However, on my machine, the 2nd line is
the
>> fastest operation. The only candidates for a bottleneck seem to be
>> FoldList
>> and ListCorrelate, or perhaps something funny is going on with packed
>> arrays. I'd be curious what timings my function would get if you
>> treated
>> it like treat2 .
>> Carl Woll
>>> On Fri, 6 May 2005 14:12:55 -0400, Carl K. Woll
>>>> ----- Original Message -----
===
>>>> Subject: Re: letrec/named let
>>>>
>>>>
>>>>> That's a very nice rescue of the Ordering@Ordering solution.
>>>>>
>>>>> Here's a test with (probably) no repeated elements, however, that
>>>>> shows
>>>>> treat ahead. Perhaps there's an intermediate situation where carl
>>>>> wins,
>>>>> or
>>>>> it matters whether the data is integer or real?
>>>>>
>>>>> data=Table[Random[],{10^6}];
>>>>> Timing[one=treat@data;]
>>>>> Timing[three=carl@data;]
>>>>> one==three
>>>>>
>>>>> {10.109 Second,Null}
>>>>> {15.938 Second,Null}
>>>>> True
>>>>>
>>>>
>>>> Interesting. On my machine, carl is over 5 times faster than treat for
>>>> the
>>>> same test. I have version 4.1 on a Windows XP operating system. What
is
>>>> your
>>>> setup? Perhaps you could test each individual line of carl to see
where
>>>> the
>>>> bottleneck is.
>>>>
>>>> Carl Woll
>>>>
>>>>> Bobby
>>>>>
>>>>> On Fri, 6 May 2005 13:26:03 -0400, Carl K. Woll
>>>>>
>>>>>
>>>>>>> Our solutions agree on lists of strictly positive integers, but
>>>>>>> timings
>>>>>>> depend a great deal on the minimum value:
>>>>>>>
>>>>>>> Clear[treat, andrzej]
>>>>>>> treat[s_List] := Module[{u = Union@s}, s /. Dispatch@Thread[u ->
>>>>>>> Range@
>>>>>>> Length@u]]
>>>>>>> andrzej[s_List] :=
>>>>>>> First@NestWhile[Apply[If[FreeQ[##], {#1 /. x_ /;
>>>>>>> x > #2 :> x - 1, #2}, {#1, #2 + 1}] &, #] &, {s, 1},
>>>>>>> Last[#] < Max[First[#]] &]
>>>>>>>
>>>>>>> data=1+RandomArray[PoissonDistribution[2],{10^6}];
>>>>>>> Timing[one=treat@data;]
>>>>>>> Timing[two=andrzej@data;]
>>>>>>> one == two
>>>>>>>
>>>>>>> {0.593 Second,Null}
>>>>>>> {0.657 Second,Null}
>>>>>>> True
>>>>>>>
>>>>>>
>>>>>>
>>>>>> Another possibility modeled after DrBob's first answer is the
>>>>>> following:
>>>>>>
>>>>>> carl[s_] := Module[{ord,t},
>>>>>> ord = Ordering[s];
>>>>>> t = FoldList[Plus, 1, Sign[Abs[ListCorrelate[{1, -1}, s[[ord]]]]]];
>>>>>> t[[Ordering[ord]]]
>>>>>> ]
>>>>>>
>>>>>> If there are a lot of repeated elements in the data, then treat
seems
>>>>>> to
>>>>>> be
>>>>>> faster. On the other hand, if there aren't a lot of repeated
>>>>>> elements,
>>>>>> then
>>>>>> carl seems to be faster. It seems like it ought to be possible to
>>>>>> compute
>>>>>> Ordering[Ordering[data]] more quickly since
>>>>>> Ordering[Ordering[Ordering[data]]] equals Ordering[data], but I
>>>>>> couldn't
>>>>>> think of a way.
>>>>>>
>>>>>> Carl Woll
>>>>> --
>>>>> DrBob@bigfoot.com
>>> --
>>> DrBob@bigfoot.com
> --
> DrBob@bigfoot.com
===
Subject: Re: letrec/named let
I was assuming nothing in your code would be slower in the newest version,
so I was hoping to identify a component that had gotten faster from version
to version in MY code. Perhaps Union is 12 times faster in 5.1.1, for
instance.
Either way, as I said before, we need timings on both machines to narrow it
down.
Nevertheless, here are timings for statements in carl:
carlTimed[s_]:=Module[{ord,t,ans},
Print@Timing[ord=Ordering@s;Ordering];
Print@Timing[t=FoldList[Plus,1,Sign[Abs[ListCorrelate[{1,-1},s[[ord]]]]]];
FoldList];
Print@Timing[ans=t[[Ordering[ord]]];Part]]
data=1+RandomArray[PoissonDistribution[2],{10^6}];
Timing[three=carlTimed@data;]
{0.813 Second,Ordering}
{0.281 Second,FoldList}
{0.641 Second,Part}
{1.735 Second,Null}
data=Table[Random[],{10^6}];
Timing[three=carlTimed@data;]
{8.031 Second,Ordering}
{0.391 Second,FoldList}
{7.281 Second,Part}
{15.906 Second,Null}
Bobby
> ----- Original Message -----
===
> Subject: Re: letrec/named let
>> I'm guessing Union, ReplaceAll, Dispatch, and/or Thread are better
>> optimized in version 5.1.1. We'd have to get relative times for various
>> statements on BOTH our machines, to narrow it down. Here they are for my
>> machine:
>> treat2[s_List] := Module[{u, t, d, ans},
>> Print@Timing[u = Union@s; Union];
>> Print@Timing[t = Thread[u -> Range@Length@u]; Thread];
>> Print@Timing[d = Dispatch@t; Dispatch];
>> Print@Timing[ans = s /. d; ReplaceAll];
>> ans]
>> data=1+RandomArray[PoissonDistribution[2],{10^6}];
>> Timing[treat2@data;]
>> {0.265 Second,Union}
>> {0. Second,Thread}
>> {0. Second,Dispatch}
>> {0.297 Second,ReplaceAll}
>> {0.562 Second,Null}
>> data=Table[Random[],{10^6}];
>> Timing[treat2@data;]
>> {0.625 Second,Union}
>> {2.922 Second,Thread}
>> {1.625 Second,Dispatch}
>> {3.89 Second,ReplaceAll}
>> {10.328 Second,Null}
>> If your machine has Dispatch (for instance) taking a larger fraction of
>> the total time, we'll have a clue to what changed.
>> I'm actually shocked by how fast this is, considering the relatively
naive
>> (IMO) method involved.
>> Bobby
> I'm not surprised by the timings you give for treat. I'm surprised by
your
> timing for my function, carl. On my machine, applying my function to a
> million reals takes about 2 times as long as a Union. On your machine, it
> apparently takes about 24 times as long as a Union. The 1st and 3rd lines
of
> my function are basically sorting operations, so the bottleneck on your
> machine must be the 2nd line. However, on my machine, the 2nd line is the
> fastest operation. The only candidates for a bottleneck seem to be
FoldList
> and ListCorrelate, or perhaps something funny is going on with packed
> arrays. I'd be curious what timings my function would get if you
treated
> it like treat2 .
> Carl Woll
>> On Fri, 6 May 2005 14:12:55 -0400, Carl K. Woll
>>> ----- Original Message -----
>>>
===
>>> Subject: Re: letrec/named let
>>>> That's a very nice rescue of the Ordering@Ordering solution.
>>>>
>>>> Here's a test with (probably) no repeated elements, however, that
shows
>>>> treat ahead. Perhaps there's an intermediate situation where carl
wins,
>>>> or
>>>> it matters whether the data is integer or real?
>>>>
>>>> data=Table[Random[],{10^6}];
>>>> Timing[one=treat@data;]
>>>> Timing[three=carl@data;]
>>>> one==three
>>>>
>>>> {10.109 Second,Null}
>>>> {15.938 Second,Null}
>>>> True
>>>>
>>> Interesting. On my machine, carl is over 5 times faster than treat for
>>> the
>>> same test. I have version 4.1 on a Windows XP operating system. What is
>>> your
>>> setup? Perhaps you could test each individual line of carl to see where
>>> the
>>> bottleneck is.
>>> Carl Woll
>>>> Bobby
>>>>
>>>> On Fri, 6 May 2005 13:26:03 -0400, Carl K. Woll
>>>>
>>>>>> Our solutions agree on lists of strictly positive integers, but
>>>>>> timings
>>>>>> depend a great deal on the minimum value:
>>>>>>
>>>>>> Clear[treat, andrzej]
>>>>>> treat[s_List] := Module[{u = Union@s}, s /. Dispatch@Thread[u ->
>>>>>> Range@
>>>>>> Length@u]]
>>>>>> andrzej[s_List] :=
>>>>>> First@NestWhile[Apply[If[FreeQ[##], {#1 /. x_ /;
>>>>>> x > #2 :> x - 1, #2}, {#1, #2 + 1}] &, #] &, {s, 1},
>>>>>> Last[#] < Max[First[#]] &]
>>>>>>
>>>>>> data=1+RandomArray[PoissonDistribution[2],{10^6}];
>>>>>> Timing[one=treat@data;]
>>>>>> Timing[two=andrzej@data;]
>>>>>> one == two
>>>>>>
>>>>>> {0.593 Second,Null}
>>>>>> {0.657 Second,Null}
>>>>>> True
>>>>>>
>>>>>
>>>>>
>>>>> Another possibility modeled after DrBob's first answer is the
>>>>> following:
>>>>>
>>>>> carl[s_] := Module[{ord,t},
>>>>> ord = Ordering[s];
>>>>> t = FoldList[Plus, 1, Sign[Abs[ListCorrelate[{1, -1}, s[[ord]]]]]];
>>>>> t[[Ordering[ord]]]
>>>>> ]
>>>>>
>>>>> If there are a lot of repeated elements in the data, then treat seems
>>>>> to
>>>>> be
>>>>> faster. On the other hand, if there aren't a lot of repeated
elements,
>>>>> then
>>>>> carl seems to be faster. It seems like it ought to be possible to
>>>>> compute
>>>>> Ordering[Ordering[data]] more quickly since
>>>>> Ordering[Ordering[Ordering[data]]] equals Ordering[data], but I
>>>>> couldn't
>>>>> think of a way.
>>>>>
>>>>> Carl Woll
>>>> --
>>>> DrBob@bigfoot.com
>>>>
>>>>
>> --
>> DrBob@bigfoot.com
--
DrBob@bigfoot.com
===
Subject: Re: letrec/named let
Bobby
> ----- Original Message -----
===
> Subject: Re: letrec/named let
>> That's a very nice rescue of the Ordering@Ordering solution.
>> Here's a test with (probably) no repeated elements, however, that shows
>> treat ahead. Perhaps there's an intermediate situation where carl wins,
or
>> it matters whether the data is integer or real?
>> data=Table[Random[],{10^6}];
>> Timing[one=treat@data;]
>> Timing[three=carl@data;]
>> one==three
>> {10.109 Second,Null}
>> {15.938 Second,Null}
>> True
> Interesting. On my machine, carl is over 5 times faster than treat for
> the same test. I have version 4.1 on a Windows XP operating system. What
> is your setup? Perhaps you could test each individual line of carl to
> see where the bottleneck is.
> Carl Woll
>> Bobby
>> On Fri, 6 May 2005 13:26:03 -0400, Carl K. Woll
>>>> Our solutions agree on lists of strictly positive integers, but
>>>> timings depend a great deal on the minimum value:
>>>>
>>>> Clear[treat, andrzej]
>>>> treat[s_List] := Module[{u = Union@s},
>>>>   s /. Dispatch@Thread[u -> Range@Length@u]]
>>>> andrzej[s_List] :=
>>>> First@NestWhile[Apply[If[FreeQ[##], {#1 /. x_ /;
>>>> x > #2 :> x - 1, #2}, {#1, #2 + 1}] &, #] &, {s, 1},
>>>> Last[#] < Max[First[#]] &]
>>>>
>>>> data=1+RandomArray[PoissonDistribution[2],{10^6}];
>>>> Timing[one=treat@data;]
>>>> Timing[two=andrzej@data;]
>>>> one == two
>>>>
>>>> {0.593 Second,Null}
>>>> {0.657 Second,Null}
>>>> True
>>>>
>>> Another possibility modeled after DrBob's first answer is the
>>> following:
>>> carl[s_] := Module[{ord,t},
>>> ord = Ordering[s];
>>> t = FoldList[Plus, 1, Sign[Abs[ListCorrelate[{1, -1}, s[[ord]]]]]];
>>> t[[Ordering[ord]]]
>>> ]
>>> If there are a lot of repeated elements in the data, then treat seems
>>> to be faster. On the other hand, if there aren't a lot of repeated
>>> elements, then carl seems to be faster. It seems like it ought to be
>>> possible to compute Ordering[Ordering[data]] more quickly since
>>> Ordering[Ordering[Ordering[data]]] equals Ordering[data], but I
>>> couldn't think of a way.
>>> Carl Woll
>> --
>> DrBob@bigfoot.com
--
DrBob@bigfoot.com
===
Subject: Re: letrec/named let
----- Original Message -----
===
Subject: Re: letrec/named let
> I'm guessing Union, ReplaceAll, Dispatch, and/or Thread are better
> optimized in version 5.1.1. We'd have to get relative times for various
> statements on BOTH our machines, to narrow it down. Here they are for my
> machine:
> treat2[s_List] := Module[{u, t, d, ans},
> Print@Timing[u = Union@s; Union];
> Print@Timing[t = Thread[u -> Range@Length@u]; Thread];
> Print@Timing[d = Dispatch@t; Dispatch];
> Print@Timing[ans = s /. d; ReplaceAll];
> ans]
> data=1+RandomArray[PoissonDistribution[2],{10^6}];
> Timing[treat2@data;]
> {0.265 Second,Union}
> {0. Second,Thread}
> {0. Second,Dispatch}
> {0.297 Second,ReplaceAll}
> {0.562 Second,Null}
> data=Table[Random[],{10^6}];
> Timing[treat2@data;]
> {0.625 Second,Union}
> {2.922 Second,Thread}
> {1.625 Second,Dispatch}
> {3.89 Second,ReplaceAll}
> {10.328 Second,Null}
> If your machine has Dispatch (for instance) taking a larger fraction of
> the total time, we'll have a clue to what changed.
> I'm actually shocked by how fast this is, considering the relatively naive
> (IMO) method involved.
> Bobby
I'm not surprised by the timings you give for treat. I'm surprised by your
timing for my function, carl. On my machine, applying my function to a
million reals takes about 2 times as long as a Union. On your machine, it
apparently takes about 24 times as long as a Union. The 1st and 3rd lines of
my function are basically sorting operations, so the bottleneck on your
machine must be the 2nd line. However, on my machine, the 2nd line is the
fastest operation. The only candidates for a bottleneck seem to be FoldList
and ListCorrelate, or perhaps something funny is going on with packed
arrays. I'd be curious what timings my function would get if you treated
it like treat2.
Carl Woll
> On Fri, 6 May 2005 14:12:55 -0400, Carl K. Woll
>> ----- Original Message -----
===
>> Subject: Re: letrec/named let
>>> That's a very nice rescue of the Ordering@Ordering solution.
>>> Here's a test with (probably) no repeated elements, however, that shows
>>> treat ahead. Perhaps there's an intermediate situation where carl wins,
>>> or
>>> it matters whether the data is integer or real?
>>> data=Table[Random[],{10^6}];
>>> Timing[one=treat@data;]
>>> Timing[three=carl@data;]
>>> one==three
>>> {10.109 Second,Null}
>>> {15.938 Second,Null}
>>> True
>> Interesting. On my machine, carl is over 5 times faster than treat for
>> the
>> same test. I have version 4.1 on a Windows XP operating system. What is
>> your
>> setup? Perhaps you could test each individual line of carl to see where
>> the
>> bottleneck is.
>> Carl Woll
>>> Bobby
>>> On Fri, 6 May 2005 13:26:03 -0400, Carl K. Woll
>>>>> Our solutions agree on lists of strictly positive integers, but
>>>>> timings
>>>>> depend a great deal on the minimum value:
>>>>>
>>>>> Clear[treat, andrzej]
>>>>> treat[s_List] := Module[{u = Union@s}, s /. Dispatch@Thread[u ->
>>>>> Range@
>>>>> Length@u]]
>>>>> andrzej[s_List] :=
>>>>> First@NestWhile[Apply[If[FreeQ[##], {#1 /. x_ /;
>>>>> x > #2 :> x - 1, #2}, {#1, #2 + 1}] &, #] &, {s, 1},
>>>>> Last[#] < Max[First[#]] &]
>>>>>
>>>>> data=1+RandomArray[PoissonDistribution[2],{10^6}];
>>>>> Timing[one=treat@data;]
>>>>> Timing[two=andrzej@data;]
>>>>> one == two
>>>>>
>>>>> {0.593 Second,Null}
>>>>> {0.657 Second,Null}
>>>>> True
>>>>>
>>>>
>>>>
>>>> Another possibility modeled after DrBob's first answer is the
>>>> following:
>>>>
>>>> carl[s_] := Module[{ord,t},
>>>> ord = Ordering[s];
>>>> t = FoldList[Plus, 1, Sign[Abs[ListCorrelate[{1, -1}, s[[ord]]]]]];
>>>> t[[Ordering[ord]]]
>>>> ]
>>>>
>>>> If there are a lot of repeated elements in the data, then treat seems
>>>> to
>>>> be
>>>> faster. On the other hand, if there aren't a lot of repeated elements,
>>>> then
>>>> carl seems to be faster. It seems like it ought to be possible to
>>>> compute
>>>> Ordering[Ordering[data]] more quickly since
>>>> Ordering[Ordering[Ordering[data]]] equals Ordering[data], but I
>>>> couldn't
>>>> think of a way.
>>>>
>>>> Carl Woll
>>> --
>>> DrBob@bigfoot.com
> --
> DrBob@bigfoot.com
===
Subject: Re: letrec/named let
I'm guessing Union, ReplaceAll, Dispatch, and/or Thread are better optimized
in version 5.1.1. We'd have to get relative times for various statements on
BOTH our machines, to narrow it down. Here they are for my machine:
treat2[s_List] := Module[{u, t, d, ans},
Print@Timing[u = Union@s; Union];
Print@Timing[t = Thread[u -> Range@Length@u]; Thread];
Print@Timing[d = Dispatch@t; Dispatch];
Print@Timing[ans = s /. d; ReplaceAll];
ans]
data=1+RandomArray[PoissonDistribution[2],{10^6}];
Timing[treat2@data;]
{0.265 Second,Union}
{0. Second,Thread}
{0. Second,Dispatch}
{0.297 Second,ReplaceAll}
{0.562 Second,Null}
data=Table[Random[],{10^6}];
Timing[treat2@data;]
{0.625 Second,Union}
{2.922 Second,Thread}
{1.625 Second,Dispatch}
{3.89 Second,ReplaceAll}
{10.328 Second,Null}
If your machine has Dispatch (for instance) taking a larger fraction of the
total time, we'll have a clue to what changed.
I'm actually shocked by how fast this is, considering the relatively naive
(IMO) method involved.
Bobby
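For readers following the per-step profiling idea outside Mathematica: treat2 above times each stage of the pipeline separately (Union, Thread, Dispatch, ReplaceAll). A minimal Python sketch of the same technique, with our own illustrative helper names (not from any library):

```python
import time

def timed(label, thunk):
    # Evaluate thunk(), print how long it took, and return its result --
    # analogous to the Print@Timing[...] calls inside treat2.
    start = time.perf_counter()
    result = thunk()
    print(f"{label}: {time.perf_counter() - start:.3f} s")
    return result

def treat2(s):
    # Dense-rank s, timing each pipeline step separately so the
    # bottleneck stage can be identified on a given machine.
    u = timed("Union", lambda: sorted(set(s)))
    rules = timed("Thread", lambda: {v: i for i, v in enumerate(u, start=1)})
    return timed("ReplaceAll", lambda: [rules[v] for v in s])
```

On a small input such as [10, 2, 4, 7, 8] this prints one timing line per stage and returns the dense ranks [5, 1, 2, 3, 4].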
> ----- Original Message -----
===
> Subject: Re: letrec/named let
>> That's a very nice rescue of the Ordering@Ordering solution.
>> Here's a test with (probably) no repeated elements, however, that shows
>> treat ahead. Perhaps there's an intermediate situation where carl wins,
>> or it matters whether the data is integer or real?
>> data=Table[Random[],{10^6}];
>> Timing[one=treat@data;]
>> Timing[three=carl@data;]
>> one==three
>> {10.109 Second,Null}
>> {15.938 Second,Null}
>> True
> Interesting. On my machine, carl is over 5 times faster than treat for
> the same test. I have version 4.1 on a Windows XP operating system. What
> is your setup? Perhaps you could test each individual line of carl to
> see where the bottleneck is.
> Carl Woll
>> Bobby
>> On Fri, 6 May 2005 13:26:03 -0400, Carl K. Woll
>>>> Our solutions agree on lists of strictly positive integers, but
>>>> timings depend a great deal on the minimum value:
>>>>
>>>> Clear[treat, andrzej]
>>>> treat[s_List] := Module[{u = Union@s},
>>>>   s /. Dispatch@Thread[u -> Range@Length@u]]
>>>> andrzej[s_List] :=
>>>> First@NestWhile[Apply[If[FreeQ[##], {#1 /. x_ /;
>>>> x > #2 :> x - 1, #2}, {#1, #2 + 1}] &, #] &, {s, 1},
>>>> Last[#] < Max[First[#]] &]
>>>>
>>>> data=1+RandomArray[PoissonDistribution[2],{10^6}];
>>>> Timing[one=treat@data;]
>>>> Timing[two=andrzej@data;]
>>>> one == two
>>>>
>>>> {0.593 Second,Null}
>>>> {0.657 Second,Null}
>>>> True
>>>>
>>> Another possibility modeled after DrBob's first answer is the
>>> following:
>>> carl[s_] := Module[{ord,t},
>>> ord = Ordering[s];
>>> t = FoldList[Plus, 1, Sign[Abs[ListCorrelate[{1, -1}, s[[ord]]]]]];
>>> t[[Ordering[ord]]]
>>> ]
>>> If there are a lot of repeated elements in the data, then treat seems
>>> to be faster. On the other hand, if there aren't a lot of repeated
>>> elements, then carl seems to be faster. It seems like it ought to be
>>> possible to compute Ordering[Ordering[data]] more quickly since
>>> Ordering[Ordering[Ordering[data]]] equals Ordering[data], but I
>>> couldn't think of a way.
>>> Carl Woll
>> --
>> DrBob@bigfoot.com
--
DrBob@bigfoot.com
===
Subject: Re: letrec/named let
----- Original Message -----
===
Subject: Re: letrec/named let
> That's a very nice rescue of the Ordering@Ordering solution.
> Here's a test with (probably) no repeated elements, however, that shows
> treat ahead. Perhaps there's an intermediate situation where carl wins, or
> it matters whether the data is integer or real?
> data=Table[Random[],{10^6}];
> Timing[one=treat@data;]
> Timing[three=carl@data;]
> one==three
> {10.109 Second,Null}
> {15.938 Second,Null}
> True
Interesting. On my machine, carl is over 5 times faster than treat for the
same test. I have version 4.1 on a Windows XP operating system. What is your
setup? Perhaps you could test each individual line of carl to see where the
bottleneck is.
Carl Woll
> Bobby
> On Fri, 6 May 2005 13:26:03 -0400, Carl K. Woll
>>> Our solutions agree on lists of strictly positive integers, but timings
>>> depend a great deal on the minimum value:
>>> Clear[treat, andrzej]
>>> treat[s_List] := Module[{u = Union@s}, s /. Dispatch@Thread[u -> Range@
>>> Length@u]]
>>> andrzej[s_List] :=
>>> First@NestWhile[Apply[If[FreeQ[##], {#1 /. x_ /;
>>> x > #2 :> x - 1, #2}, {#1, #2 + 1}] &, #] &, {s, 1},
>>> Last[#] < Max[First[#]] &]
>>> data=1+RandomArray[PoissonDistribution[2],{10^6}];
>>> Timing[one=treat@data;]
>>> Timing[two=andrzej@data;]
>>> one == two
>>> {0.593 Second,Null}
>>> {0.657 Second,Null}
>>> True
>> Another possibility modeled after DrBob's first answer is the following:
>> carl[s_] := Module[{ord,t},
>> ord = Ordering[s];
>> t = FoldList[Plus, 1, Sign[Abs[ListCorrelate[{1, -1}, s[[ord]]]]]];
>> t[[Ordering[ord]]]
>> ]
>> If there are a lot of repeated elements in the data, then treat seems
>> to be faster. On the other hand, if there aren't a lot of repeated
>> elements, then carl seems to be faster. It seems like it ought to be
>> possible to compute Ordering[Ordering[data]] more quickly since
>> Ordering[Ordering[Ordering[data]]] equals Ordering[data], but I
>> couldn't think of a way.
>> Carl Woll
> --
> DrBob@bigfoot.com
===
Subject: Re: letrec/named let
That's a very nice rescue of the Ordering@Ordering solution.
Here's a test with (probably) no repeated elements, however, that shows
treat ahead. Perhaps there's an intermediate situation where carl wins, or it
matters whether the data is integer or real?
data=Table[Random[],{10^6}];
Timing[one=treat@data;]
Timing[three=carl@data;]
one==three
{10.109 Second,Null}
{15.938 Second,Null}
True
Bobby
>> Our solutions agree on lists of strictly positive integers, but timings
>> depend a great deal on the minimum value:
>> Clear[treat, andrzej]
>> treat[s_List] := Module[{u = Union@s}, s /. Dispatch@Thread[u -> Range@
>> Length@u]]
>> andrzej[s_List] :=
>> First@NestWhile[Apply[If[FreeQ[##], {#1 /. x_ /;
>> x > #2 :> x - 1, #2}, {#1, #2 + 1}] &, #] &, {s, 1},
>> Last[#] < Max[First[#]] &]
>> data=1+RandomArray[PoissonDistribution[2],{10^6}];
>> Timing[one=treat@data;]
>> Timing[two=andrzej@data;]
>> one == two
>> {0.593 Second,Null}
>> {0.657 Second,Null}
>> True
> Another possibility modeled after DrBob's first answer is the following:
> carl[s_] := Module[{ord,t},
> ord = Ordering[s];
> t = FoldList[Plus, 1, Sign[Abs[ListCorrelate[{1, -1}, s[[ord]]]]]];
> t[[Ordering[ord]]]
> ]
> If there are a lot of repeated elements in the data, then treat seems
> to be faster. On the other hand, if there aren't a lot of repeated
> elements, then carl seems to be faster. It seems like it ought to be
> possible to compute Ordering[Ordering[data]] more quickly since
> Ordering[Ordering[Ordering[data]]] equals Ordering[data], but I couldn't
> think of a way.
> Carl Woll
--
DrBob@bigfoot.com
===
Subject: Re: letrec/named let
> hi. i'm a lisper/schemer and i'm working with mathematica. i
> appreciate the lisp-like nature of mathematica but i can't seem to
> easily replicate some of the functionality i like which is forcing me to
> write ugly side-effect code.
> for instance, how do you do the equivalent of a named let in
> mathematica (NOTE! I know i can take the max of a list, this is just a
> simple example of a named let)
> (define (max-of-list lst)
> (let loop ((lst (cdr lst))
> (best (car lst)))
> (if (null? lst)
> best
> (loop (cdr lst)
> (if (> (car lst) best)
> (car lst)
> best)))))
> (max-of-list '(1 2 3 4 5 2))
>> 5
> Here is a mathematica function to compress a sequence numerically.
> here is one attempt using functions where i pass the function to
> itself... there has to be a better way
> CompressNumericalSequence[S_] := Module[
> {C = Function[{C, R, i},
> If[i < Max[R],
> If[Length[Position[R, i]] == 0,
> C[C, (If[# > i, # - 1, #]) & /@ R, i],
> C[C, R, i + 1]],
> R]]},
> C[C, S, 1]];
> CompressNumericalSequence[{10, 2, 4, 7, 8}]
> {5, 1, 2, 3, 4}
> Also, is it possible to do letrec in mathematica? (essentially, i know
> i can do recursive function declarations at the top level... my question
> is whether i can do them at lower levels?)...
Much of the required information can be found in the Mathematica Help
Browser, in The Mathematica Book: the initial values are always
evaluated before the module is executed. So you could say that
Module[{fact = If[# == 0, 1, #*fact[# - 1]]&}, fact[5]]
works like let, returning 5*fact[4], and
Module[{fact}, fact = If[# == 0, 1, #*fact[# - 1]]&; fact[5]]
works like letrec, returning 120. This is the problem with your definition
of CompressNumericalSequence -- you simply have to move the assignment to
C to the module body, and then you won't need to pass C to itself.
(Another way is to use #0, defining fact as fact = If[# == 0, 1, #*#0[# -
1]]&).
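The same let/letrec distinction exists in other languages. As a cross-language sketch in Python (our own illustrative function, not part of any library): a lambda assigned to a name can call that name recursively, because the body resolves the name at call time, just as assigning the Function inside the Module body does.

```python
def letrec_factorial(n):
    # 'letrec' behaviour: fact is bound first, and the lambda body looks
    # the name fact up only when it is called, so the recursion works --
    # like Module[{fact}, fact = ...&; fact[n]] rather than
    # Module[{fact = ...&}, fact[n]].
    fact = lambda k: 1 if k == 0 else k * fact(k - 1)
    return fact(n)
```

Here letrec_factorial(5) evaluates to 120, matching the Module-body version above.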
The example with named let can be rewritten like this:
maxoflist = Function[lst,
  Module[{loop},
    loop = Function[{lst, best},
      If[lst === {},
        best,
        loop[Rest@lst,
          If[First@lst > best, First@lst, best]]]];
    loop[Rest@lst, First@lst]]]
A more convenient way can be to write
maxoflist[$lst_] := Module[{loop},
  loop[lst_, best_] :=
    If[lst === {},
      best,
      loop[Rest@lst,
        If[First@lst > best, First@lst, best]]];
  loop[Rest@$lst, First@$lst]]
In this notation you cannot use the same argument name lst for both
maxoflist and loop though. Also $lst cannot be used even as a local
variable name, so you cannot write something like
add1[x_] := Module[{helper, val = x},
helper[val_] := Module[{x = 1}, val + x];
helper[val]
]
because add1[a] will work but add1[1] won't. I have read that there were
arguments both in favor and against this model where rules can just
replace Module local variables; personally I cannot see any convincing
arguments in favor of it, and at the same time the drawbacks are quite
clear -- creating nested definitions becomes a pain.
It is also possible to create something similar to function closures:
In[1]:= f = Module[{x1 = 1, x2 = 1, y}, y = x2++&; y[]&]
Out[1]= y$16[]&
In[2]:= f[]
Out[2]= 1
In[3]:= f[]
Out[3]= 2
In[4]:= {x1$16, x2$16}
Out[4]= {x1$16, 3}
x2$16 still can be accessed directly, but the only 'honest' way to access
it is to call the function returned from Module, pretty much like Scheme
variables that are visible only to their closures. This is also a simple
way to emulate C-style static local variables. Incidentally, we can see
that x2$16 was retained after the evaluation of Module, because the top
level kept the link to y$16 and y$16 has a link to x2$16, but x1$16 was
discarded.
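The closure trick translates directly to other languages as well. A minimal Python sketch (the names are ours): a function returned from another scope keeps its free variables alive, which also gives C-style static locals, just as y$16 keeps x2$16 alive after the Module returns.

```python
def make_counter():
    # x2 survives after make_counter returns, because the returned
    # closure keeps a reference to it; x2 is visible only through that
    # closure, like a Module variable reachable only via the returned
    # function.
    x2 = 1
    def f():
        nonlocal x2
        current = x2
        x2 += 1       # emulate the x2++& increment
        return current
    return f
```

Calling it behaves like the In/Out transcript above: f = make_counter(); f() gives 1, f() gives 2, and a second counter starts fresh at 1.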
Maxim Rytin
m.r@inbox.ru
===
Subject: Re: letrec/named let
>> ......
>>Neither of these functions does what CompressNumericalSequence does... And
>>because I'm no mathematica-guru I don't yet quite understand what they
>>do, anyway. Furthermore, they are actually SLOWER than my code (see
>>below on a test of 1,000,000 entries).
>>dan
Hello Dan,
somehow the test got lost?! So I've run one on my machine
In[1]:= $Version
your original code:
In[2]:=CompressNumericalSequence[S_] :=
Module[{C = Function[{C, R, i}, If[i < Max[R],
If[Length[Position[R, i]] == 0,
C[C, (If[#1 > i, #1 - 1, #1] & ) /@ R, i], C[C, R, i + 1]],
R]]}, C[C, S, 1]];
my code:
In[3]:=
CNS[lst_] := Module[{tmp = First /@ Split[Sort[lst]]},
Flatten[(Position[tmp, #1] & ) /@ lst]]
2 things are missing: a supercomputer and patience.
The test will run on only 10^5 elements.
In[4]:= test = Table[Random[Integer, {1, 5000}], {100000}];
In[5]:= First[Timing[t1 = CompressNumericalSequence[test]; ]]
$IterationLimit::itlim: Iteration limit of 4096 exceeded. More...
Out[5]= 25.25*Second
We have to do a special setting to run your code
In[6]:=Block[{$IterationLimit = Infinity},
First[Timing[t1 = CompressNumericalSequence[test]; ]]]
Out[6]= 93.375*Second
In[7]:= $IterationLimit (* is 4096 again *)
Out[7]= 4096
In[8]:= First[Timing[t2 = CNS[test]; ]]
Out[8]= 15.407*Second
test the equality of the results:
In[9]:= t1 == t2
Out[9]= True
Well, I see: it's SLOWER, the result is completely different and I don't
have to tweak Mathematica's default settings to run it at all.
These are three major disadvantages.
Sorry for bothering you,
Peter
===
Subject: Re: letrec/named let
Our solutions agree on lists of strictly positive integers, but timings
depend a great deal on the minimum value:
Clear[treat, andrzej]
treat[s_List] := Module[{u = Union@s},
  s /. Dispatch@Thread[u -> Range@Length@u]]
andrzej[s_List] :=
First@NestWhile[Apply[If[FreeQ[##], {#1 /. x_ /;
x > #2 :> x - 1, #2}, {#1, #2 + 1}] &, #] &, {s, 1},
Last[#] < Max[First[#]] &]
data=1+RandomArray[PoissonDistribution[2],{10^6}];
Timing[one=treat@data;]
Timing[two=andrzej@data;]
one == two
{0.593 Second,Null}
{0.657 Second,Null}
True
data = 10 + RandomArray[PoissonDistribution[2], {10^6}];
Timing[one = treat@data;]
Timing[two = andrzej@data;]
one == two
{0.61 Second,Null}
{13.671 Second,Null}
True
This is all it takes to fix that problem, however:
andrzej2[s_List] := andrzej[s + (1 - Min@s)]
data = 10 + RandomArray[PoissonDistribution[2], {10^6}];
Timing[one = treat@data;]
Timing[two = andrzej@data;]
Timing[three = andrzej2@data;]
one == two == three
{0.594 Second,Null}
{13.64 Second,Null}
{0.657 Second,Null}
True
As a side-effect, andrzej2 agrees with my solution on arbitrary lists of
integers:
data = -7 + RandomArray[PoissonDistribution[2], {10^6}];
Timing[one = treat@data;]
Timing[two = andrzej@data;]
Timing[three = andrzej2@data;]
one == two
one == three
{0.578 Second,Null}
{0.218 Second,Null}
{0.61 Second,Null}
False
True
Bobby
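The andrzej2 fix is just a normalization step: shift the data so its minimum becomes 1 before ranking. The same idea in Python terms (a sketch mirroring andrzej[s + (1 - Min@s)]; the function name is ours):

```python
def shift_to_min_one(s):
    # Shift every element so the smallest becomes exactly 1.
    # The relative order -- and therefore the dense ranks -- of the
    # shifted list is identical to that of the original list.
    offset = 1 - min(s)
    return [x + offset for x in s]
```

For example shift_to_min_one([10, 12, 11]) gives [1, 3, 2], and the shift works equally well for negative minima.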
> Unfortunately I really can't afford to spend as much time on such
> questions as i would like but I hope someone else will take up this
> discussion. Let me just quickly point out you can eliminate the use of
> With, First and Last in various ways, for example:
> CompressNumericalSequence2[s_] :=
>   First[NestWhile[Apply[If[FreeQ[##], {#1 /. x_ /; x > #2 :> x - 1, #2},
>     {#1, #2 + 1}] &, #] &, {s, 1}, Last[#] < Max[First[#]] &]]
> I did not bother eliminating Last and First in the test function but it
> can of course be done in the same way.
> But the main point was quite different. First, there is an almost
> countless number of ways to program this sort of thing in Mathematica
> and if other participants in the MathGroup get interested in this
> question you will see lots of very different solutions, some of which
> will no doubt be faster and more elegant than the one I came up with
> (though your formulation of your question in terms of Lisp will
> probably discourage some).
> Secondly, while it is possible to program recursively in Mathematica it
> is not in general the best way, which I am sure you will find out for
> yourself if you continue doing so...
> Andrzej
>> Thought you'd be interested to know that if I simply replace my
>> First[Position[...]] test with your FreeQ test then the performance of
>> our different versions match each other. First[Position is slow and
>> this makes sense as FreeQ can stop immediately upon finding a match
>> while First[Position enumerates all matches and therefore takes time
>> linear in the length of the sequence (while FreeQ's runtime is also
>> linear but on the average case depends probabilistically on the length
>> of the sequence, query number and the distribution of numbers in the
>> list... my guess is that its sub-linear on average)
>> Here is the version simply replacing First[Position with FreeQ and some
>> timing results using 10,000,000 entries.
>> CompressNumericalSequence[S_] := Module[
>> {C = Function[{C, R, i},
>> If[i < Max[R],
>> If[FreeQ[R, i],
>> C[C, (If[# > i, # - 1, #]) & /@ R, i],
>> C[C, R, i + 1]],
>> R]]},
>> C[C, S, 1]];
>> ...(using 10^7 test entries)...
>> ans1=CompressNumericalSequence1[test];//Timing
>> ans2=CompressNumericalSequence[test];//Timing
>> ans1 == ans2
>> {29.4135 Second,Null}
>> {29.3635 Second,Null}
>> True
>> The remaining difference is how we transform 'a' (or R in my case). It
>> appears that your (a /. x ......) rule is about as fast as mine as
>> replacing them results in no statistically significant difference in
>> timing.
>> There are two things that bug me. 1) Your version requires these
>> First[#] and Last[#] constructs in order to grab the parameters. To
>> me,
>> this is not only aesthetically dissatisfying, but, more practically,
>> hard
>> to maintain/reason about with more and more parameters. Yes, you can
>> use a With like you have, but this sort of construct should be built
>> into the language. Can you extend the language in such a way? Second,
>> my version, while it has variable names and scoping, requires that I
>> pass in a copy of itself which seems unreasonable.
>> The issue is that the variables being defined in a scope like a
>> With/Module are not available to each other before the body. So
>> With/Module is more like let than letrec. Here is a version that
>> does away with passing itself but I had to move the function definition
>> into the body in order to accomplish this (this version ran in about
>> the
>> same time as the above version).
>> CompressNumericalSequence[S_] := Module[
>> {C},
>> C = Function[{R, i},
>> If[i < Max[R],
>> If[FreeQ[R, i],
>> C[R /. x_ /; x > i :> x - 1, i],
>> C[R, i + 1]],
>> R]];
>> C[S, 1]];
>> I suppose this is how I would write it in the future as I don't like
>> the
>> First/Last aspect of the NestWhile.
>> thoughts?
>> dan
>>> Unfortunately I am out of practice with Lisp. Let, I think,
>>> corresponds
>>> to Mathematica's With. I don't think there is anything that
>>> corresponds
>>> to named let or letrec but they are not needed. The reason is that
>>> the similarity between Mathematica and Lisp is very deceptive.
>>> Mathematica's syntax, or more correctly one part of its syntax does
>>> indeed resemble Lisp but the internals of the two languages are
>>> quite
>>> different. (The most important difference is that Mathematica's lists
>>> are arrays and Lisp style linked lists have to be explicitly formed
>>> and
>>> are not very easy to use.) You can program (with some effort) in
>>> Mathematica in Lisp style just as you can program in C style, but if
>>> you do so your programs will usually be not very efficient and in some
>>> cases unmanageably inefficient. I know because many years ago when I
>>> started to program in Mathematica I also attempted to program in Lisp
>>> style.
>>> Actually your program CompressNumericalSequence performs better than I
>>> would have expected but almost any program written in a more natural
>>> Mathematica style will outperform it. Here is a very casual attempt:
>>> CompressNumericalSequence1[s_] :=
>>>   First[NestWhile[With[{a = First[#], b = Last[#]},
>>>     If[FreeQ[a, b], {a /. x_ /; x > b :> x - 1, b}, {a, b + 1}]] &,
>>>     {s, 1}, Last[#] < Max[First[#]] &]]
>>> test=Table[Random[Integer,20],{10^6}];
>>> ans1=CompressNumericalSequence1[test];//Timing
>>> {3.84 Second,Null}
>>> ans2=CompressNumericalSequence[test];//Timing
>>> {11.66 Second,Null}
>>> In[6]:=
>>> ans1==ans2
>>> True
>>> Andrzej Kozlowski
>>> Chiba, Japan
>>> http://www.akikoz.net/andrzej/index.html
>>> http://www.mimuw.edu.pl/~akoz/
--
DrBob@bigfoot.com
===
Subject: Re: letrec/named let
> Our solutions agree on lists of strictly positive integers, but timings
> depend a great deal on the minimum value:
> Clear[treat, andrzej]
> treat[s_List] := Module[{u = Union@s}, s /. Dispatch@Thread[u -> Range@
> Length@u]]
> andrzej[s_List] :=
> First@NestWhile[Apply[If[FreeQ[##], {#1 /. x_ /;
> x > #2 :> x - 1, #2}, {#1, #2 + 1}] &, #] &, {s, 1},
> Last[#] < Max[First[#]] &]
> data=1+RandomArray[PoissonDistribution[2],{10^6}];
> Timing[one=treat@data;]
> Timing[two=andrzej@data;]
> one == two
> {0.593 Second,Null}
> {0.657 Second,Null}
> True
Another possibility modeled after DrBob's first answer is the following:
carl[s_] := Module[{ord,t},
ord = Ordering[s];
t = FoldList[Plus, 1, Sign[Abs[ListCorrelate[{1, -1}, s[[ord]]]]]];
t[[Ordering[ord]]]
]
If there are a lot of repeated elements in the data, then treat seems to be
faster. On the other hand, if there aren't a lot of repeated elements, then
carl seems to be faster. It seems like it ought to be possible to compute
Ordering[Ordering[data]] more quickly since
Ordering[Ordering[Ordering[data]]] equals Ordering[data], but I couldn't
think of a way.
Carl Woll
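To make the comparison concrete for readers outside Mathematica, both ranking strategies can be sketched in Python (the function names are ours): treat maps each value to its index among the sorted distinct values, while carl sorts, cumulatively counts the places where the sorted values change, and then unsorts.

```python
def rank_by_union(s):
    # treat-style: sorted distinct values -> 1..k, then replace each
    # element by its rank (Union + Dispatch@Thread in the original).
    ranks = {v: i for i, v in enumerate(sorted(set(s)), start=1)}
    return [ranks[v] for v in s]

def rank_by_ordering(s):
    # carl-style: Ordering, then FoldList over the change indicators
    # (the Sign[Abs[ListCorrelate[{1, -1}, ...]]] step), then invert
    # the sorting permutation (t[[Ordering[ord]]]).
    order = sorted(range(len(s)), key=s.__getitem__)
    srt = [s[i] for i in order]
    t = [1]
    for prev, cur in zip(srt, srt[1:]):
        t.append(t[-1] + (cur != prev))  # +1 whenever the value jumps
    out = [0] * len(s)
    for pos, idx in enumerate(order):
        out[idx] = t[pos]
    return out
```

Both return [5, 1, 2, 3, 4] for [10, 2, 4, 7, 8], matching the CompressNumericalSequence example earlier in the thread, and they agree on lists with repeated elements.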
===
Subject: Debugging
In earlier emails I explained that the program terminates at the
statement in error if the following assignment is included in the
program:
$MessagePrePrint = (Print[Error -> Quit]; Quit[])&;
My question is:
Can it be achieved that the error message is printed before the program
finishes?
Hermann Schmitt
===
Subject: Re: Debugging
> In earlier emails I explained that the program terminates at the
> statement in error if the following assignment is included in the
> program:
> $MessagePrePrint = (Print[Error -> Quit]; Quit[])&;
> My question is:
> Can it be achieved that the error message is printed before the program
> finishes?
> Hermann Schmitt
I did manage this in my debugger, M-Debug, but that was in a program
executing under TraceScan.
David Bailey
http://www.dbaileyconsultancy.co.uk
===
Subject: RC circuit
Two questions. I am trying to solve a simple RC circuit. (1) I can't use
the letter i for current; is there a way around this?
(2) How do I solve this equation for dv/dt: i = v/r + C dv/dt
===
Subject: meaning of a * in search string?
hi;
I have thought that a * is a wild card for string matching,
which is supposed to match anything, including the empty string?
Then, can someone explain why `* does not produce
anything but `*`* does below?
Names[SignalProcessing`Analog`*]
{}
Names[SignalProcessing`Analog`*`*]
{SignalProcessing`Analog`Fourier`Private`a,
etc...
The above makes no sense to me at all. This is the
help on Names:
?Names
Names[string] gives a list of the names of
symbols which match the string.
so that means the string abc`* should generate a result
if the string abc`*`* does, which it did!
Does Mathematica use a different definition of the * for
string matching than the one we learned in school?
Steve
===
Subject: Re: Problem with substitutions in SparseArray?
> Substitutions don't seem to work in SparseArrays
> try:
> m = SparseArray[{{a,0},{0,0}}];
> ms = m/.a->1.;
> Normal[ms]
> RESULT:
> {{a, 0}, {0, 0}}
The reason is that SparseArrays are atoms:
m=SparseArray[{{a,0},{0,0}}];
AtomQ[m]
True
You cannot change the structure of atoms by applying rules to their
components because atoms do not have components. Compare this with,
for example,
Complex[2,3]/.(2->5)
2+3 I
(Please no more discussions whether complex numbers are really atoms
or not).
To apply a rule to an atom you have to apply it to the whole thing,
e.g.
(n = m /. HoldPattern[SparseArray[x__, {y__, {a}}]] :>
SparseArray[x, {y, {1}}])//InputForm
SparseArray[Automatic, {2, 2}, 0, {1, {{0, 1, 1}, {{1}}},
{1}}]
This is of course not very convenient, but fortunately there is an easy
way to do the same thing:
n=SparseArray[ArrayRules[m]/.a->1]//InputForm
SparseArray[Automatic, {1, 1}, 0, {1, {{0, 1}, {{1}}},
{1}}]
Andrzej Kozlowski
Chiba, Japan
http://www.akikoz.net/andrzej/index.html
http://www.mimuw.edu.pl/~akoz/
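The ArrayRules route has a natural analogue in any sparse representation: instead of trying to rewrite the opaque object in place, expose the stored entries as rules, apply the substitution, and rebuild. A plain-dict Python sketch (deliberately not a real sparse library; the helper names are ours):

```python
def array_rules(sparse):
    # Expose a sparse matrix (dict of position -> value) as a rule
    # list, analogous to ArrayRules[m] without the catch-all _ -> 0.
    return list(sparse.items())

def substitute(sparse, old, new):
    # SparseArray[ArrayRules[m] /. a -> 1] analogue: map the rule over
    # the stored entries only and rebuild; zeros are never touched.
    return {pos: (new if val == old else val)
            for pos, val in array_rules(sparse)}
```

For example, with m = {(0, 0): "a"} (a sparse matrix holding one symbolic entry), substitute(m, "a", 1.0) rebuilds it with the numeric value in place.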
===
Subject: Problem with substitutions in SparseArray?
Substitutions don't seem to work in SparseArrays
try:
m = SparseArray[{{a,0},{0,0}}];
ms = m/.a->1.;
Normal[ms]
RESULT:
{{a, 0}, {0, 0}}
===
Subject: Re: NSum: badly missed Option
Try SetOptions on NIntegrate before the NSum call.
> I just had occasion to use NSum with a summation from n = 1 to Infinity.
> Works well except that I am getting this well-known complaint from
> NIntegrate (which is used in the summation method):
> NIntegrate::ncvb: NIntegrate failed to converge to prescribed
> accuracy after 7 recursive bisections in x
> The problem is that the usual fix for this, boosting MaxRecursion,
> can't be used because MaxRecursion is *not an option* for NSum.
> Shouldn't this standard NIntegrate option be available here?
> alan
--
Chris Chiasson
http://chrischiasson.com/
1 (810) 265-3161
===
Subject: Re: NSum: badly missed Option
> Try SetOptions on NIntegrate before the NSum call.
alan
===
Subject: NSum: badly missed Option
I just had occasion to use NSum with a summation from n = 1 to Infinity.
Works well except that I am getting this well-known complaint from
NIntegrate (which is used in the summation method):
NIntegrate::ncvb: NIntegrate failed to converge to prescribed accuracy
after 7 recursive bisections in x
The problem is that the usual fix for this, boosting MaxRecursion,
can't be used because MaxRecursion is *not an option* for NSum.
Shouldn't this standard NIntegrate option be available here?
alan
===
Subject: Re: Mathematica Notebook Organiztion
It would be in everyone's interest if Wolfram adopted an XML schema
for storing notebooks instead of the (unparsable) format that
they currently use. At the moment notebooks can only be used by
Mathematica, which limits their use as a publishing format.
Joe
> WRI, please take notice. David is truly a guru on visual presentation
> and organization of information.
> Bobby
> On Fri, 6 May 2005 03:01:35 -0400 (EDT), David Park
> > I've made this a new topic because we have rather drifted off from
> > the subject of writing packages to the subject of using notebooks in
> > the best manner.
> > It is my view that Mathematica notebooks (and similar such entities)
> > are an important medium because of the ability to interactively meld
> > text, calculations, graphics and animations in one document. Theodore
> > Gray deserves a lot of credit for his work on this concept. We are
> > still learning how to use this medium. But things are not perfect yet
> > and Professor Siegman has touched on some issues.
--
Josef Karthauser (joe@tao.org.uk) http://www.josef-k.net/
FreeBSD (cvs meister, admin and hacker) http://www.uk.FreeBSD.org/
===
Subject: Re: Mathematica Notebook Organiztion
WRI, please take notice. David is truly a guru on visual presentation and
organization of information.
Bobby
> I've made this a new topic because we have rather drifted off from the
> subject of writing packages to the subject of using notebooks in the
> best manner.
> It is my view that Mathematica notebooks (and similar such entities)
> are an important medium because of the ability to interactively meld
> text, calculations, graphics and animations in one document. Theodore
> Gray deserves a lot of credit for his work on this concept. We are
> still learning how to use this medium. But things are not perfect yet
> and Professor Siegman has touched on some issues.
> There is no reason that the Initialization and Routines Sections couldn't
be
> at the end of the notebook. The Input cells in these Sections should be
made
> into Initialization cells (and choose NOT to save as an AutoSave
package).
> That way one doesn't have to necessarily evaluate a notebook from the
top.
> The initializations are automatically performed when the first statement,
> anywhere, is evaluated. I like to make my notebooks such that a reader
can
> start at any Section and begin evaluating. If this is not possible
because
> of a rigid progression in the sections then the reader should be so
> instructed.
> Often I will select the Initialization and Routine section headings and
> change the FontColor to Gray. I also often add Automatically
Initialized.
> This subdues the sections and tells the reader he can generally ignore
them.
> Sections are not automatically opened when Initialization cells are
> evaluated. My experience is that the sections remain closed. Also you can
> select a Section and completely evaluate it without ever opening it, or
> seeing the results. (I've had super geniuses complain that they evaluated
my
> notebook but got no results, simply because they didn't know how to open
> Sections!)
> Graphics code can be put in closed cells in the running sections. It
doesn't
> necessarily have to be put in the Routines section. That way you can
> intermix text, calculations and graphics in a smooth manner. The only
> problem is getting the reader to evaluate the closed cells, even if it
has
> been carefully explained in an Introduction. They are so thin and small
new
> readers often overlook them. It might be nice if one had the option of
> having a closed cell display a cell tag. It would also be nice if closed
> cells could be opened and closed in the same way as Sections.
> It is also possible to generate proofs, derivations or step by step
> calculations by interspersing Print statements with %% referenced
> statements. These can also be put in closed cells so that the main code
is
> hidden.
> For printing (It will take some time for people to give up the security
> blanket of printed documents - inferior as they are!) there is no reason
why
> some Sections can't be open and others closed.
> Professor Siegman's case:
> Section
> Text (a few paragraphs introducing the section)
> Subsection
> Subsection
> is a good point. I don't see any direct way around it other than making
the
> Text an Introductory Subsection, which may be objectionable because it is
so
> short, or manually closing these Text cells, but this is too difficult
for
> the reader to work with. Perhaps there might be a FrontEnd command that
> gives the outline view.
> Another approach would be to make a Table of Contents Section. The
various
> items in the Table of Contents could actually be links to the
corresponding
> sections of the notebook. This is like pdf documents where there is often
a
> table of contents with links in the side bar. It requires extra work to
> write the sections, but then it also requires extra work in a pdf
document.
> It would also be nice to have the following construction:
> Section
> Text and Input cells
> BoxSection
> Text and Input cells
> End of BoxSection marker
> Text and Input cells
> where the BoxSection could be closed or terminated, and subsequent Text
and
> Input cells would NOT be part of the BoxSection, but part of the
containing
> section. The BoxSections would be like boxes in textbooks which contain a
> side discussion without interrupting the main flow of material. (Possibly
> there could be a way to have manual grouping only in some subsection of a
> notebook, but I would much prefer a more versatile automatic grouping
> because manual grouping is too subject to abuse.)
> I have only looked a little at the Author's Tools application. It does
give
> information about constructing Help documentation, which I omitted to
> mention in an earlier posting. But otherwise I haven't figured out just
what
> Author's Tools does for one in the way of constructing better notebooks
for
> readers. I wish WRI had provided a short elegant example with the
> application.
> It might also be nice to have the ability to construct stand alone
browsers.
> Then the categories in the browser would be like the table of contents.
In
> essence, authors would write Mathematica browsers, in which Mathematica
> notebooks formed the various chapters and sections.
> I wish that there were better standard notebook styles supplied with
> Mathematica. I find many of the standard ones useless. WRI needs to hire
> Edward Tufte, or someone equivalent, to design some notebook styles. It
> certainly is preferable to use a standard style because then one can
count
> on readers having it.
> I would like to see one more Section level in notebooks. I would like to
see
> the default to have GroupOpenCloseIcons on all the Section levels - but
NOT
> on anything else. (Especially not on Input/Output groups.) The triangular
> open/close icons are intuitive to new readers - the cell brackets are not.
I
> would like to see a better balance, actually a smaller range, of font
sizes.
> In the Default style, for instance, I think the Title font size is much
too
> large, and the Text font size is too small. The Text, Input and Output
font
> sizes should be reasonably close in size. After all, text cells and
> Input/Output cells are of equal importance (IMHO) and should better
blend.
> and text have roughly comparable font sizes.
> David Park
> djmp@earthlink.net
> http://home.earthlink.net/~djmp/
> Agreed, this is the sensible way to [include routines in notebooks], and
how
> I generally do it.
> But two gripes about the result:
> 1) In PhD dissertations, journal papers, books, reports, the (sometimes
> lengthy) Routines are most commonly are sent to the end, e.g. are
> stuck in Appendices, and the Initialization (or Introduction) section is
> immediately followed by the important (to the reader) sections such as
> Calculations and Results. Among other things that lets you easily
> select and print the Introduction, the Calculations and the Results to
> toss in a file folder or (three-hole-type) notebook, leaving off the
> lengthy Routines stuff.
> Mathematica doesn't make it easy to organize its notebooks that way.
> 2) In my (limited) experience if I use Automatic Grouping and try to
> close groups to see only the section headings (to get an overview of the
> notebook structure and faster scrolling to , this doesn't work right
> (i.e., the way I want it!) unless the cell structure is strictly
> hierarchical. E.g., if I have repeated cell sequences in the form
> Section
> Text (a few paragraphs introducing the section)
> Subsection
> Subsection
> closing these groups so I'll see just the Section headings does not
> close the Text cells, although it does close the Subsections (maybe I'm
> not doing things right?).
> Also, closing the Routines section, then running the notebook from the
> top (to get a fresh start) opens the Routines section, doesn't it?
--
DrBob@bigfoot.com
===
Subject: Re: Mathematica Notebook Organization
I would like to second most of what David Park said.
In addition, I hope that one day the internet will run on an
_executable_ markup language. XHTML and CSS are great for specifying
static documents, but they lack the power of executing code on the end
user's machine. That is why, for instance, GMail is written (in part)
in JavaScript.
Mathematica is most of the way there, because the same markup language
that describes text layout and formatting is the language used to
execute (inter)active content.
(The first 3 of the following ideas may not presently be important in
Mathematica, but what about Publicon?:)
1. Mathematica should be able to flow text (and graphics and formulas)
from one arbitrary point to another and to fit text within any defined
border or region. This would make it possible to set up columns and
pages. Think of a printed three column page layout with a heading
along the top.
2. Mathematica needs to be able to handle multiple text flows. This
would make it possible to have side bars and navigation menus
embedded into a document.
3. Style sheets should be able to exclude some flows from presentation
in certain media. For instance, the navigation and sidebar menus
would be unneeded in a printed version, but a table of contents and
footers might be needed.
4. Mathematica's language should be easily embeddable in a defined
region of an XHTML document, and MathReader should be implemented as a
browser extension that renders into that defined region. (Note how
people don't mind subtle Flash animations, but that loading pdf files
can be really annoying to them.)
> [David Park's post and Tony Siegman's earlier comments quoted in full;
> trimmed here, as both appear in full later in this thread.]
--
Chris Chiasson
http://chrischiasson.com/
1 (810) 265-3161
===
Subject: Re: Re: Mathematica Notebook Organization
To Tony Siegman, Stanford University:
> . . .the same markup language that describes text layout and
> formatting [being] the language [that is ] used to [do calculations,
> create graphics, and] execute (inter)active content . .
> count me as skeptical -- VERY skeptical. This is a very BAD idea, that
> will inevitably cause much more damage than the dubious and limited
> benefits it may produce.
> Basically, I'd argue that attempting to combine both of these quite
> different functions into a single language or package and a single user
> interface is an absolute guarantee that the language and the system and
> the interface will all become so complex, so convoluted, so hard to
> learn and use and remember between uses, that ordinary users (meaning,
> e.g., ordinary working engineering and science professionals) will
> abandon such a system for simpler individual tools with easily
> interchangable file formats which will enable them perform these two
> separate functions separately, much more easily, with much less of a
> learning curve, and with enormously less aggravation.
Are you saying that most ordinary users have abandoned Mathematica?
Mathematica already describes notebooks in the same language in which it
takes commands. Take a look at the raw format of a Mathematica style
sheet or a regular Mathematica notebook.
Notice how the first parts are commented out like this: (*stuff*)
Notice that the entire content section of the stylesheet/notebook is
actually one expression that looks like this: Notebook[stuff]
I would guess that even the slightly fancy Publicon notebooks are laid
out in the same manner.
It seems to me that WRI uses the Mathematica language to describe
notebooks. Do you agree?
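To make this concrete, here is a minimal notebook expression; opening any .nb file in a text editor shows the real thing, and the cell contents below are just illustrative:

```mathematica
(* a .nb file is itself one Mathematica expression: a Notebook
   wrapping a list of Cells, each tagged with a style name *)
Notebook[{
  Cell["A heading", "Section"],
  Cell["Some explanatory text.", "Text"],
  Cell[BoxData[RowBox[{"1", "+", "1"}]], "Input"]
}]
```

Style sheets are notebooks in the same format, which is what lets the kernel generate or rewrite them with NotebookPut and NotebookWrite.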
Compare that to the current system for web content: first one needs
Apache to serve web pages. Then one needs CSS (which has a different
syntax than XHTML) to format the XHTML. Often there is also PHP for
dynamically generating the server's XHTML (perhaps a list of updated
URLs for a sidebar) and JavaScript for client-side scripting.
It would be very attractive to me to eliminate the need for using many
different languages to deliver rich web content. Mathematica can
already do animations, scripting, and dynamic code generation
(NotebookWrite). Given the above, why do you think it's such a bad
idea to use Mathematica for the web?
> [Tony Siegman's post quoted in full; trimmed here, as it appears in
> full later in this thread.]
--
Chris Chiasson
http://chrischiasson.com/
1 (810) 265-3161
===
Subject: Re: Re: Mathematica Notebook Organization
> The problem is, the *only* advantage of such a unified tool *for the
> user*, so far as I can see, is that you only have to double-click on one
> icon to start it up; and the difficulties it then produces for ordinary
> users are immense and unending. I won't attempt to list at this point
> all the different ways these difficulties arise (inherently, and
> unavoidably) in such a system; but if this debate continues I may get
> motivated to respond with such a list.
I think the user you are talking about does not do any programming in
TeX or PostScript (and perhaps not that much in Mathematica) and relies
on others, TeXperts and various kinds of gurus, to write packages and
programs that make these general typesetting and graphics systems work
in a way that suits his particular needs. Although I spend most of my
time nowadays in front of my PowerBook, I think I do not qualify as a
computer type (I am a professor of mathematics), but I think it would
be great if I could dispense with the services of TeXperts and
PostScript gurus and be able to do everything I need to do using
Mathematica, the one programming language I know pretty well. Whether
this hope is realistic I can't really tell, but I am keenly awaiting
Mathematica 6, which, I think, will at last dispense with PostScript
and allow full control of text and graphics from within the Kernel.
Andrzej Kozlowski
===
Subject: Re: Mathematica Notebook Organization
> In addition, I hope that one day the internet will run on an
> _executable_ markup language. XHTML and CSS are great for specifying
> static documents, but they lack the power of executing code on the end
> user's machine. That is why, for instance, GMail is written (in part)
> in Java script.
> Mathematica is most of the way there, because the same markup language
> that describes text layout and formatting is the language used to
> execute (inter)active content.
> WRI, please take notice. David is truly a guru on visual presentation and
> organization of information.
> Bobby
I don't claim to be a guru on any of this, but I do claim to have a
very large amount of ordinary user experience (multiple decades of
experience) with (a) markup systems for presentation of technical
material (books, reports, class notes, seminar slides), and (b) software
for extensive numerical and symbolic computation and preparation of
graphics.
Based on this long experience, when I read about
. . .the same markup language that describes text layout and
formatting [being] the language [that is ] used to [do calculations,
create graphics, and] execute (inter)active content . .
count me as skeptical -- VERY skeptical. This is a very BAD idea, that
will inevitably cause much more damage than the dubious and limited
benefits it may produce.
Basically, I'd argue that attempting to combine both of these quite
different functions into a single language or package and a single user
interface is an absolute guarantee that the language and the system and
the interface will all become so complex, so convoluted, so hard to
learn and use and remember between uses, that ordinary users (meaning,
e.g., ordinary working engineering and science professionals) will
abandon such a system for simpler individual tools with easily
interchangeable file formats which will enable them to perform these two
separate functions separately, much more easily, with much less of a
learning curve, and with enormously less aggravation.
I think this concept of having such a single, universal language keeps
emerging (mostly among computer types?) because it poses real and
difficult and very interesting intellectual challenges to computer types
just to accomplish this -- and that's fine; intellectual challenges are
what creative people live for, and no one can blame or criticize
computer types for being challenged by these goals.
The problem is, the *only* advantage of such a unified tool *for the
user*, so far as I can see, is that you only have to double-click on one
icon to start it up; and the difficulties it then produces for ordinary
users are immense and unending. I won't attempt to list at this point
all the different ways these difficulties arise (inherently, and
unavoidably) in such a system; but if this debate continues I may get
motivated to respond with such a list.
I love Mathematica, I love TeX, I'm very fond of Acrobat and Illustrator
and BBEdit and . . . but the more people try to stuff the capabilities
of all of these into Mathematica, the surer I am that this is a terrible
idea.
--Tony Siegman, Stanford University
===
Subject: Mathematica Notebook Organization
I've made this a new topic because we have rather drifted off from the
subject of writing packages to the subject of using notebooks in the best
manner.
It is my view that Mathematica notebooks (and similar such entities) are an
important advance because of the ability to interactively meld text,
calculations, graphics and animations in one document. Theodore Gray
deserves a lot of credit for his work on this concept. We are still
learning how to use this medium. But things are not perfect yet, and
Professor Siegman has touched on some issues.
There is no reason that the Initialization and Routines Sections couldn't be
at the end of the notebook. The Input cells in these Sections should be made
into Initialization cells (and choose NOT to save as an AutoSave package).
That way one doesn't necessarily have to evaluate a notebook from the top.
The initializations are automatically performed when the first statement,
anywhere, is evaluated. I like to make my notebooks such that a reader can
start at any Section and begin evaluating. If this is not possible because
of a rigid progression in the sections, then the reader should be so
instructed.
Often I will select the Initialization and Routine section headings and
change the FontColor to Gray. I also often add "Automatically Initialized."
This subdues the sections and tells the reader he can generally ignore them.
Sections are not automatically opened when Initialization cells are
evaluated. My experience is that the sections remain closed. Also you can
select a Section and completely evaluate it without ever opening it, or
seeing the results. (I've had super geniuses complain that they evaluated my
notebook but got no results, simply because they didn't know how to open
Sections!)
Graphics code can be put in closed cells in the running sections. It doesn't
necessarily have to be put in the Routines section. That way you can
intermix text, calculations and graphics in a smooth manner. The only
problem is getting the reader to evaluate the closed cells, even if this has
been carefully explained in an Introduction. They are so thin and small that
new readers often overlook them. It might be nice if one had the option of
having a closed cell display a cell tag. It would also be nice if closed
cells could be opened and closed in the same way as Sections.
It is also possible to generate proofs, derivations or step-by-step
calculations by interspersing Print statements with %%-referenced
statements. These can also be put in closed cells so that the main code is
hidden.
For printing (it will take some time for people to give up the security
blanket of printed documents, inferior as they are!) there is no reason why
some Sections can't be open and others closed.
Professor Siegman's case:
Section
Text (a few paragraphs introducing the section)
Subsection
Subsection
is a good point. I don't see any direct way around it other than making the
Text an introductory Subsection, which may be objectionable because it is so
short, or manually closing these Text cells, which is too difficult for
the reader to work with. Perhaps there might be a FrontEnd command that
gives the outline view.
Another approach would be to make a Table of Contents Section. The various
items in the Table of Contents could actually be links to the corresponding
sections of the notebook. This is like pdf documents where there is often a
table of contents with links in the side bar. It requires extra work to
write the sections, but then it also requires extra work in a pdf document.
It would also be nice to have the following construction:
Section
Text and Input cells
BoxSection
Text and Input cells
End of BoxSection marker
Text and Input cells
where the BoxSection could be closed or terminated, and subsequent Text and
Input cells would NOT be part of the BoxSection, but part of the containing
section. The BoxSections would be like boxes in textbooks which contain a
side discussion without interrupting the main flow of material. (Possibly
there could be a way to have manual grouping only in some subsection of a
notebook, but I would much prefer a more versatile automatic grouping
because manual grouping is too subject to abuse.)
I have only looked a little at the Author's Tools application. It does give
information about constructing Help documentation, which I omitted to
mention in an earlier posting. But otherwise I haven't figured out just what
Author's Tools does for one in the way of constructing better notebooks for
readers. I wish WRI had provided a short, elegant example with the
application.
It might also be nice to have the ability to construct stand-alone browsers.
Then the categories in the browser would be like the table of contents. In
essence, authors would write Mathematica browsers, in which Mathematica
notebooks formed the various chapters and sections.
I wish that there were better standard notebook styles supplied with
Mathematica. I find many of the standard ones useless. WRI needs to hire
Edward Tufte, or someone equivalent, to design some notebook styles. It
certainly is preferable to use a standard style because then one can count
on readers having it.
I would like to see one more Section level in notebooks. I would like to
see
the default to have GroupOpenCloseIcons on all the Section levels - but NOT
on anything else. (Especially not on Input/Output groups.) The triangular
open/close icons are intuitive to new readers - the cell brackets are not.
I
would like to see a better balance, actually a smaller range, of font
sizes.
In the Default style, for instance, I think the Title font size is much too
large, and the Text font size is too small. The Text, Input and Output font
sizes should be reasonably close in size. After all, text cells and
Input/Output cells are of equal importance (IMHO) and should blend better,
with roughly comparable font sizes.
David Park
djmp@earthlink.net
http://home.earthlink.net/~djmp/
Agreed, this is the sensible way to [include routines in notebooks], and
how
I generally do it.
But two gripes about the result:
1) In PhD dissertations, journal papers, books, and reports, the (sometimes
lengthy) Routines are most commonly sent to the end, e.g. stuck in
Appendices, and the Initialization (or Introduction) section is
immediately followed by the important (to the reader) sections such as
Calculations and Results. Among other things that lets you easily
select and print the Introduction, the Calculations and the Results to
toss in a file folder or (three-hole-type) notebook, leaving off the
lengthy Routines stuff.
Mathematica doesn't make it easy to organize its notebooks that way.
2) In my (limited) experience if I use Automatic Grouping and try to
close groups to see only the section headings (to get an overview of the
notebook structure and faster scrolling), this doesn't work right
(i.e., the way I want it!) unless the cell structure is strictly
hierarchical. E.g., if I have repeated cell sequences in the form
Section
Text (a few paragraphs introducing the section)
Subsection
Subsection
closing these groups so I'll see just the Section headings does not
close the Text cells, although it does close the Subsections (maybe I'm
not doing things right?).
Also, closing the Routines section, then running the notebook from the
top (to get a fresh start) opens the Routines section, doesn't it?
===
Subject: Re: named pattern variable scoped as global, should be local
Fred,
As usual very good and convincing analysis. As for whether this
behaviour constitutes a bug or not: I think this is probably one of
those questions that has no clear-cut answer. It seems to me reasonable
to define a bug as something in the
code that causes behaviour that is undesirable in a way that either was not
realised by the programmer or could not have been avoided without
causing even more undesirable behaviour (this latter situation is quite
common in programs like Mathematica). I am now also inclined to think
that under the above definition this probably does not qualify as a
bug. It seems to me that this behaviour is mildly undesirable and
causes some difficulty to users, but probably it is a side effect of
something intentional. Still, I do not see any obvious benefits from $
not being appended when the RHS contains unscoped variables; maybe this
improves performance but the gain seems to be very slight. Perhaps the
reason lies in trying to be consistent with some more general
principles of scoping.
Unfortunately these principles do not appear to be clearly stated
anywhere, which of course is one reason why detective work like yours
is so valuable (as well as entertaining).
Andrzej
> Andrzej,
> I agree that it is at least very surprising that the following two
> commands
> produce different results:
> In[1]:=
> x=7;
> Module[{x}, z /. x_->2 x]
> Module[{x, y=1}, z /. x_->2 x y]
> Out[2]=
> 14
> Out[3]=
> 2 z
> Whether this has to be considered as a bug or not is a matter of taste.
> After all, these commands are pretty artificial.
> In the following I try to give a possible explanation of what is going
> on
> here. I start with the observation that when in a scoping construct an
> expression has to be evaluated that contains other variables than
> those
> that will be scoped, a $-sign is appended to the scoped variables. A
> number
> is added only when the evaluation takes place. Have a look at the
> following
> example, with Module.
> In[4]:=
> Clear[f]; f[t_] = Hold[ Module[{x}, x+t]];
> f[3]
> ReleaseHold[%]
> Out[5]=
> Hold[Module[{x$},x$+3]]
> Out[6]=
> 3+x$18
> When the expression to be evaluated does not contain variables from
> outside
> the scoping construct, the precaution of appending a $-sign does not
> take
> place:
> In[7]:=
> Clear[f]; f[t_] = Hold[ Module[{x}, x]];
> f[3]
> ReleaseHold[%]
> Out[8]=
> Hold[Module[{x},x]]
> Out[9]=
> x$19
> It works exactly the same way with Rule instead of Module:
> In[10]:=
> Clear[f];
> f[t_] := Hold[x_ -> 2*x + t]
> f[3]
> Out[12]=
> Hold[x$_ -> 2*x$ + 3]
> In[13]:=
> Clear[f];
> f[t_] := Hold[x_ -> 2*x]
> f[3]
> Out[15]=
> Hold[x_ -> 2*x]
> Now we turn to the two commands we started with. In
> Module[{x}, z /. x_->2 x],
> the expression that has to be evaluated within Rule only contains the
> named
> pattern. Therefore no $ sign is used; it is the variable Global`x. The
> value
> of that variable is not hidden by Module, so the right-hand side of
> the rule
> becomes 14:
> In[16]:=
> Module[{x}, z /. x_ -> (Information[x]; 2*x)]
> Global`x
> x = 7
> Out[16]=
> 14
> By the way, when we use Block instead of Module, the value of Global`x
> is
> hidden by Block and therefore it works as expected:
> In[17]:=
> Block[{x}, z /. x_ -> (Information[x]; 2*x)]
> Global`x
> Out[17]=
> 2*z
> In the second command
> Module[{x, y=1}, z /. x_->2 x y],
> the expression to be evaluated in the rule contains an extra variable
> y. So
> x becomes Global`x$ and that variable has no value:
> In[18]:=
> Module[{x, y = 1},
> z /. x_ -> (Information[x]; 2*x*y)]
> Global`x$
> Attributes[x$] = {Temporary}
> Out[18]=
> 2*z
> Hence personally I think this behaviour is not a bug.
> Fred Simons
> Eindhoven University of Technology
>
===
> Subject: named pattern variable scoped as
> global,
> should be local
>>> When using a named pattern variable within a module, it should be
>>> scoped locally within the pattern. However, this does not seem to
>>> work
>>> as advertised. For example, shouldn't the pattern variable x in the
>>> statements below be local to the pattern? Does anyone know whether
>>> this is a bug, or whether I am just missing something about the usage
>>> of variables in patterns/condition constructs?
>>> (* this returns 14! *)
>>> x = 7;
>>> Module[{x},{1, 2, 3} /. x_ -> 2x];
>>> (* this returns {2,4,6}, assuming q is not globally defined. *)
>>> Remove[q];
>>> Module[{q},{1, 2, 3} /. q_ -> 2q];
>>> Lee
>> To me this indeed looks like a bug, but Mathematica's scoping is
>> strange and it is sometimes hard to tell what is a bug and what is a
>> part of the design.
>> The reason why it looks to me like a bug is this:
>> x=7;
>> Module[{x,y=1},{1,2,3}/.x_->2 x y]
>> {2,4,6}
>> besides this workaround I can see two other choices:
>> 1. Use RuleDelayed:
>> x=7;
>> Module[{x},{1,2,3}/.x_:>2x]
>> {2,4,6}
>> 2. Use Block instead of Module:
>> Block[{x},{1,2,3}/.x_->2x]
>> {2,4,6}
>> Andrzej Kozlowski
>> Chiba, Japan
>> http://www.akikoz.net/andrzej/index.html
>> http://www.mimuw.edu.pl/~akoz/
===
Subject: Re: named pattern variable scoped as global, should be local
Andrzej,
I agree that it is at least very surprising that the following two commands
produce different results:
In[1]:=
x=7;
Module[{x}, z /. x_->2 x]
Module[{x, y=1}, z /. x_->2 x y]
Out[2]=
14
Out[3]=
2 z
Whether this has to be considered as a bug or not is a matter of taste.
After all, these commands are pretty artificial.
In the following I try to give a possible explanation of what is going on
here. I start with the observation that when in a scoping construct an
expression has to be evaluated that contains other variables than those
that will be scoped, a $-sign is appended to the scoped variables. A number
is added only when the evaluation takes place. Have a look at the following
example, with Module.
In[4]:=
Clear[f]; f[t_] = Hold[ Module[{x}, x+t]];
f[3]
ReleaseHold[%]
Out[5]=
Hold[Module[{x$},x$+3]]
Out[6]=
3+x$18
When the expression to be evaluated does not contain variables from outside
the scoping construct, the precaution of appending a $-sign does not take
place:
In[7]:=
Clear[f]; f[t_] = Hold[ Module[{x}, x]];
f[3]
ReleaseHold[%]
Out[8]=
Hold[Module[{x},x]]
Out[9]=
x$19
It works exactly the same way with Rule instead of Module:
In[10]:=
Clear[f];
f[t_] := Hold[x_ -> 2*x + t]
f[3]
Out[12]=
Hold[x$_ -> 2*x$ + 3]
In[13]:=
Clear[f];
f[t_] := Hold[x_ -> 2*x]
f[3]
Out[15]=
Hold[x_ -> 2*x]
Now we turn to the two commands we started with. In
Module[{x}, z /. x_->2 x],
the expression that has to be evaluated within Rule only contains the named
pattern. Therefore no $ sign is used; it is the variable Global`x. The
value
of that variable is not hidden by Module, so the right-hand side of the
rule
becomes 14:
In[16]:=
Module[{x}, z /. x_ -> (Information[x]; 2*x)]
Global`x
x = 7
Out[16]=
14
By the way, when we use Block instead of Module, the value of Global`x is
hidden by Block and therefore it works as expected:
In[17]:=
Block[{x}, z /. x_ -> (Information[x]; 2*x)]
Global`x
Out[17]=
2*z
In the second command
Module[{x, y=1}, z /. x_->2 x y],
the expression to be evaluated in the rule contains an extra variable y. So
x becomes Global`x$ and that variable has no value:
In[18]:=
Module[{x, y = 1},
z /. x_ -> (Information[x]; 2*x*y)]
Global`x$
Attributes[x$] = {Temporary}
Out[18]=
2*z
Hence personally I think this behaviour is not a bug.
Fred Simons
Eindhoven University of Technology
----- Original Message -----
===
Subject: named pattern variable scoped as global,
should be local
>> When using a named pattern variable within a module, it should be
>> scoped locally within the pattern. However, this does not seem to work
>> as advertised. For example, shouldn't the pattern variable x in the
>> statements below be local to the pattern? Does anyone know whether
>> this is a bug, or whether I am just missing something about the usage
>> of variables in patterns/condition constructs?
>> (* this returns 14! *)
>> x = 7;
>> Module[{x},{1, 2, 3} /. x_ -> 2x];
>> (* this returns {2,4,6}, assuming q is not globally defined. *)
>> Remove[q];
>> Module[{q},{1, 2, 3} /. q_ -> 2q];
>> Lee
> To me this indeed looks like a bug, but Mathematica's scoping is
> strange and it is sometimes hard to tell what is a bug and what is a
> part of the design.
> The reason why it looks to me like a bug is this:
> x=7;
> Module[{x,y=1},{1,2,3}/.x_->2 x y]
> {2,4,6}
> besides this workaround I can see two other choices:
> 1. Use RuleDelayed:
> x=7;
> Module[{x},{1,2,3}/.x_:>2x]
> {2,4,6}
> 2. Use Block instead of Module:
> Block[{x},{1,2,3}/.x_->2x]
> {2,4,6}
> Andrzej Kozlowski
> Chiba, Japan
> http://www.akikoz.net/andrzej/index.html
> http://www.mimuw.edu.pl/~akoz/
===
Subject: Re: named pattern variable scoped as global, should be local
-Lee
> When using a named pattern variable within a module, it should be
> scoped locally within the pattern. However, this does not seem to work
> as advertised. For example, shouldn't the pattern variable x in the
> statements below be local to the pattern? Does anyone know whether
> this is a bug, or whether I am just missing something about the usage
> of variables in patterns/condition constructs?
> (* this returns 14! *)
> x = 7;
> Module[{x},{1, 2, 3} /. x_ -> 2x];
> (* this returns {2,4,6}, assuming q is not globally defined. *)
> Remove[q];
> Module[{q},{1, 2, 3} /. q_ -> 2q];
> Lee
===
Subject: Re: Boundary conditions in NDSolve
> when I NDSolve a 2nd order DE, I can't seem to give two boundary
> conditions at different points of the interval I am solving it on:
> NDSolve[{f''[x]==K^2*Sin[f[x]],f'[-L]==A, f[-L]==B},f,{x,-L,L}]
> produces good result, but i.e.
> NDSolve[{f''[x]==K^2*Sin[f[x]],f'[-L]==A, f[L]==B},f,{x,-L,L}]
> ____here the derivative is given at the left edge, but the function itself is
> given at the right edge___
> produces error:
> NDSolve::bvlin
> The differential equation(s) and/or boundary conditions are not linear
> in the dependent variables. NDSolve requires linearity to compute the
> solution of a multipoint boundary value problem.
> Is there a way around this?
> YZ
Thanks Chris and Ramesh,
after I posted this, I did some more research and pretty much figured
out myself what you described. I should've done it before :-)
YZ
===
Subject: Re: Boundary conditions in NDSolve
The present version of NDSolve handles linear BVPs (Boundary Value
Problems) well. Your BVP is non-linear. NDSolve is excellent for
Initial Value Problems. It is very good at handling parabolic and
hyperbolic PDEs. We can use these features to solve non-linear BVPs
with NDSolve. Here I mention two such methods.
The first is the well-known shooting method, where we guess one or
more of the initial conditions and iterate using a non-linear solver
to get the boundary conditions right. For your problem, guess the value
of f at -L, and find the deviation of the solution at L. The following
function does that:
fAtOne[s_?NumericQ] :=
  Module[{f},
    (* sol is deliberately left global so the Plot commands below can use it *)
    sol = f /. NDSolve[{f''[x] == K^2*Sin[f[x]], f'[-L] == A,
          f[-L] == s}, f, {x, -L, L}][[1]];
    sol[L] - B]
Plot the function for different s to see where the roots are. It also lets
us know if there are multiple solutions. We can get a good guess
for FindRoot from the plot. For
A=-1; B=1; K=3; L=1
Plot[fAtOne[s],{s,-10,10}]
shows that there are at least three solutions. The solutions can be
found easily as
Solution One
FindRoot[fAtOne[s], {s, -1}]
fAtOne[s] /. %
Plot[sol[x], {x, -L, L}, PlotRange -> All]
Solution Two
FindRoot[fAtOne[s], {s, 0}]
fAtOne[s] /. %
Plot[sol[x], {x, -L, L}, PlotRange -> All]
Solution Three
FindRoot[fAtOne[s], {s, -4.1}]
fAtOne[s] /. %
p1 = Plot[sol[x], {x, -L, L}, PlotRange -> All]
In the second method, the same BVP can be posed as a steady state
version of a PDE (parabolic PDE in this case). Here we solve a PDE.
The method is more involved, and is problem dependent. However if the
physics of the problem is well known, it is usually not difficult to
construct a PDE. In case of multiple solutions, the initial conditions
determine the final steady state. Here is a solution corresponding to
solution Three above.
sol1 = f /.
NDSolve[{-4*Sin[f[x, t]]*D[f[x, t],
t] + D[f[x, t], x, x] ==
K^2*Sin[f[x, t]],
(D[f[x, t], x] /. x -> -L) == A,
f[L, t] == B, f[x, 0] ==
A*(x - L) + B}, f, {x, -L, L},
{t, 0, 2}][[1]]
p2 = Plot[sol1[x, 2], {x, -L, L}, PlotRange -> All]
It can be seen that either method gives the same solution.
Show[p1,p2]
Hope this helps,
Ramesh
> when I NDSolve a 2nd order DE, I can't seem to give two boundary
> conditions at different points of the interval I am solving it on:
> NDSolve[{f''[x]==K^2*Sin[f[x]],f'[-L]==A, f[-L]==B},f,{x,-L,L}]
> produces good result, but i.e.
> NDSolve[{f''[x]==K^2*Sin[f[x]],f'[-L]==A, f[L]==B},f,{x,-L,L}]
> ____here the derivative is given at the left edge, but the function itself is
> given at the right edge___
> produces error:
> NDSolve::bvlin
> The differential equation(s) and/or boundary conditions are not linear
> in the dependent variables. NDSolve requires linearity to compute the
> solution of a multipoint boundary value problem.
> Is there a way around this?
> YZ
===
Subject: Re: Boundary conditions in NDSolve
Take a guess at the derivative on the left edge. Compute the function
value at the right edge. Guess a new value and recompute... After your
search is reasonably wide, you could hone in on the lhs derivatives
that give the rhs boundary condition.
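That search loop can be sketched in a few lines. This is an illustrative
sketch only (the parameter values and the name rhsMiss are made up; for the
posted problem, where f'[-L] is already known, one would guess f[-L]
instead, as in the fAtOne function above):

```mathematica
(* Shooting sketch: guess a left-edge quantity g, integrate the ODE,
   and measure the miss at the right edge; FindRoot then hones in on
   the g that satisfies the right-edge condition. *)
A = -1; B = 1; K = 3; L = 1;  (* illustrative values *)
rhsMiss[g_?NumericQ] := Module[{f, sol},
   sol = First[f /. NDSolve[{f''[x] == K^2 Sin[f[x]],
        f[-L] == A, f'[-L] == g}, f, {x, -L, L}]];
   sol[L] - B];
FindRoot[rhsMiss[g], {g, 0}]
```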
I am having a similar problem, but with PDEs and more variables...
> when I NDSolve a 2nd order DE, I cant seem to give two boundary
> conditions at different point of an interval I am solving it on:
> NDSolve[{f''[x]==K^2*Sin[f[x]],f'[-L]==A, f[-L]==B},f,{x,-L,L}]
> produces good result, but i.e.
> NDSolve[{f''[x]==K^2*Sin[f[x]],f'[-L]==A, f[L]==B},f,{x,-L,L}]
> ____here derivative is given at left edge, but the functi0n itself is
> given at the right edge___
> produces error:
> NDSolve::bvlin
> The differential equation(s) and/or boundary conditions are not linear
> in the dependent variables. NDSolve requires linearity to compute the
> solution of a multipoint boundary value problem.
> Is there a way around this?
> YZ
--
Chris Chiasson
http://chrischiasson.com/
1 (810) 265-3161
===
Subject: FilledPlot: Curves->Back option and Epilog not working?
I have a module that makes a filled plot. I use it to make two (or
more) displaced filled plots, then Show these plots (code below).
Everything works as expected, with two problems:
* The Curves->Back option never works.
* An additional line generated by an Epilog in the module only appears
in the final test plot.
I'm beginning to realize that when multiple plots each having an Epilog
are combined using Show or DisplayTogether, only the *final* Epilog gets
executed. Throwing in Evaluates at various stages in the process
doesn't seem to get around this.
If this is true, maybe the description of Epilog in the online Help
should say this? (Especially since it seems an intuitively *non*obvious
way for Epilog to behave -- shouldn't the stuff created by Epilog in a
Plot command become part of the plot once the plot command has been
executed?)
I don't have a clue why Curves->Back doesn't work anywhere or any way
I've tried it (including changing Front to Back in the online Help
example).
wavePlot[] := Module[{},
,
FilledPlot[ ,
Curves -> Back,
Epilog -> {Line[ ]},
DisplayFunction -> Identity]];
;
testPlot1 = wavePlot[ ];
;
testPlot2 = wavePlot[ ];
Show[testPlot1, testPlot2,
DisplayFunction -> $DisplayFunction,
PlotRange -> All];
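One possible workaround, sketched here as a guess rather than a tested
fix: pull the Epilog option out of each plot and hand Show the merged list
explicitly, so that only one (combined) Epilog is in play.

```mathematica
(* Sketch: merge the Epilog options of the individual plots, since Show
   keeps only a single Epilog setting. Assumes each plot stores its
   Epilog among its Graphics options. *)
mergedEpilog =
  Flatten[((Epilog /. Options[#]) /. Epilog -> {}) & /@
    {testPlot1, testPlot2}];
Show[testPlot1, testPlot2,
  Epilog -> mergedEpilog,
  DisplayFunction -> $DisplayFunction,
  PlotRange -> All];
```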
===
Subject: Re: Mathematica Notebook Organization
It occurred to me after posting a recent query that began
> I have a module that makes a filled plot. I use it to make a series
> of displaced filled plots, then Show these in a single graphic.
>
> Everything works as expected, with two problems:
>
> * The Curves->Back option never works.
>
> * An additional line generated by an Epilog in the module only
> appears in the final test plot of the group.
>
> (remainder snipped)
that this is a minor but near perfect example of just one of the many
kinds of hassles that arise if one attempts to program in or use
Mathematica both as numerical evaluator and graphics generator on the
one hand, and as a primary document preparation tool on the other.
Sure, Mathematica is a great tool for generating initial versions of
graphics, even (or especially) complex multi-element or multi-curve
graphics; and with some skill and effort you can create results good
enough for government work, e.g. good enough for a report or a web page
or class notes, or maybe a PhD dissertation -- though it often takes
considerable skill and effort and multiple retries to get results that
begin to look good.
Annotating, touching up, and polishing graphics to publication quality,
on the other hand, is a task that is by far best done, and much more
easily done even by unskilled hands, using a WYSIWYG, click, edit,
preview the results, and Undo if necessary tool such as, for example,
Illustrator. I should learn that I'm always better off to do the main
outlines of a graphic in Mathematica, then export the graphic as EPS, and do
the
final touch-up in Illustrator or some similar tool. Mathematica is a lousy,
frustrating tool for final graphics touch-up.
Of course I could then import the polished graphic back into the Mathematica
document -- but that destroys the interactivity which is the primary
reason for doing document preparation in the first place. Once the
graphic is out of Mathematica, it makes much more sense to keep it as an EPS
or
PDF file, which I can
* Import into my graphics database (iView, in my case) so I can easily
find it again any time I want it.
* Import into TeX or LaTeX documents (these _are_ genuinely good
document preparation tools).
* Use in PowerPoint or Acrobat/PDF slides or QuickTime files.
and so on.
Bottom line:
* Numerical and symbolic calculations and graphing of the associated
results require one set of capabilities, which are best carried out
using one kind of user interface, and which demand one quite large set
of capabilities, tools and syntax in the application that does them.
* Document preparation and presentation involves a whole different set
of capabilities, which are best carried out using quite different kinds
of user interfaces, and which demand a whole additional large set of
capabilities, tools and syntax.
* Trying to combine these quite different capabilities, tools, user
interfaces, and syntax into one single giant application -- or one
single giant language with one immense syntax -- does not really save
or simplify anything, it only makes things worse.
All the capabilities, tools, and syntax needed for all the different
tasks must still be present in the unified system (and learned by the
user). But in a unified system, the user interface becomes so complex
-- so many menus, so many commands in the one interface -- that it
becomes unusable (and unlearnable). Ditto the syntax.
With a unified system -- even if it's to some extent modular --
competition can no longer upgrade individual components or modules of
the system. But if Mathematica and Illustrator can share the task of
generating
a graphic, communicating with each other only through the graphic itself, in
some widely used format like EPS, each tool can get better separately
and without conflict.
And for the user, learning what you need to know to do what you want to
do, is no more difficult -- indeed, it's easier -- if you learn and
implement part of the necessary toolkit in Mathematica, part in Illustrator.
The
total of what you need to learn is the same; combining these into one
massive language or system makes it harder, not easier.
There are other important aspects, quite outside of graphics, where
attempts to combine content creation (analysis and calculation) and
document preparation and presentation in one single language or syntax
are equally damaging; but I've probably ranted more than enough in this
message already.
--AES
===
Subject: Re: RC circuit
To insert values for data at the last minute (even in the Plot legend),
try this:
Block[{Plot, ToString, StringJoin},
  Plot[v@t, {t, 0, 50}, Frame -> True,
    FrameLabel -> {"t seconds", "v volts"},
    PlotLabel -> "Voltage in an RC Circuit",
    Epilog -> {Text["c = " <> ToString@c <> " Farads", {30, 4}, {-1, 0}],
      Text["i = " <> ToString@i <> " Amperes", {30, 3.5}, {-1, 0}],
      Text["r = " <> ToString@r <> " Ohms", {30, 3.0}, {-1, 0}]},
    ImageSize -> 450] /. data
];
That eliminates the need for Evaluate, too.
Bobby
> Here is one method to solve your exercise. You can use 'i' for current
but
> not 'I', which is a reserved symbol. In general you should avoid symbols
> that start with capital letters. Also notice that 'equal' is represented
by
> == in Mathematica.
> Here is your equation.
> Clear[v];
> deqn = i == v[t]/r + c v'[t];
> Now we solve the equation and add the initial condition that v starts at
0
> when t == 0.
> Clear[v];
> dsolutions = DSolve[{deqn, v[0] == 0}, v, t]
> v[t_] = v[t] /. Part[dsolutions, 1, 1]
> {{v -> Function[{t}, ((-1 + E^(t/(c*r)))*i*r)/E^(t/(c*r))]}}
> ((-1 + E^(t/(c*r)))*i*r)/E^(t/(c*r))
> Now we need a set of data for some particular case to plot.
> (data = {c -> 3, i -> 2, r -> 2.5}) // TableForm
> Now we can plot. We want to substitute the data into the expression for
v[t]
> and we want to evaluate the expression in the Plot statement. I've added
> labels and text to make a more informative plot.
> Plot[Evaluate[v[t] /. data], {t, 0, 50},
> Frame -> True,
> FrameLabel -> {"t seconds", "v volts"},
> PlotLabel -> "Voltage in an RC Circuit",
> Epilog -> {Text["c = 3 Farads", {30, 4}, {-1, 0}],
> Text["i = 2 Amperes", {30, 3.5}, {-1, 0}],
> Text["r = 2.5 Ohms", {30, 3.0}, {-1, 0}]},
> ImageSize -> 450];
> It does take a little time to learn the syntax and available commands in
> Mathematica before one can efficiently use it to solve practical problems
> and homework exercises. But it is worth it if you expect to do much of
it.
> David Park
> djmp@earthlink.net
> http://home.earthlink.net/~djmp/
> Two ?'s. I am trying to solve a simple RC circuit. ?1: I can't use the
> letter i for current; is there a way around this?
> ?2. How do I solve this equation for dv/dt, eq: i = v/r + C dv/dt
--
DrBob@bigfoot.com
===
Subject: Re: RC circuit
Your question is like asking for the force in a spring and damper
system without giving any initial (or boundary) velocities or
displacements...
Could you be a little more specific about the initial conditions and
whether there is a net external voltage applied?
A diagram of the circuit would be nice, if you don't mind.
> Two ?'s. I am trying to solve a simple RC circuit. ?1: I can't use the
> letter i for current; is there a way around this?
> ?2. How do I solve this equation for dv/dt, eq: i = v/r + C dv/dt
--
Chris Chiasson
http://chrischiasson.com/
1 (810) 265-3161
===
Subject: Re: RC circuit
Here is one method to solve your exercise. You can use 'i' for current but
not 'I', which is a reserved symbol. In general you should avoid symbols
that start with capital letters. Also notice that 'equal' is represented by
== in Mathematica.
Here is your equation.
Clear[v];
deqn = i == v[t]/r + c v'[t];
Now we solve the equation and add the initial condition that v starts at 0
when t == 0.
Clear[v];
dsolutions = DSolve[{deqn, v[0] == 0}, v, t]
v[t_] = v[t] /. Part[dsolutions, 1, 1]
{{v -> Function[{t}, ((-1 + E^(t/(c*r)))*i*r)/E^(t/(c*r))]}}
((-1 + E^(t/(c*r)))*i*r)/E^(t/(c*r))
Now we need a set of data for some particular case to plot.
(data = {c -> 3, i -> 2, r -> 2.5}) // TableForm
Now we can plot. We want to substitute the data into the expression for
v[t]
and we want to evaluate the expression in the Plot statement. I've added
labels and text to make a more informative plot.
Plot[Evaluate[v[t] /. data], {t, 0, 50},
  Frame -> True,
  FrameLabel -> {"t seconds", "v volts"},
  PlotLabel -> "Voltage in an RC Circuit",
  Epilog -> {Text["c = 3 Farads", {30, 4}, {-1, 0}],
    Text["i = 2 Amperes", {30, 3.5}, {-1, 0}],
    Text["r = 2.5 Ohms", {30, 3.0}, {-1, 0}]},
  ImageSize -> 450];
It does take a little time to learn the syntax and available commands in
Mathematica before one can efficiently use it to solve practical problems
and homework exercises. But it is worth it if you expect to do much of it.
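As for question 2 taken literally, isolating dv/dt is a one-liner with
Solve (a minimal sketch using the same deqn as above):

```mathematica
(* Isolate v'[t] (i.e. dv/dt) algebraically from the circuit equation. *)
Clear[v, i, r, c];
deqn = i == v[t]/r + c v'[t];
Solve[deqn, v'[t]]
(* a single rule of the form v'[t] -> (i r - v[t])/(c r) *)
```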
David Park
djmp@earthlink.net
http://home.earthlink.net/~djmp/
[mailto:pennsylvaniajake@netscape.net]
Two ?'s. I am trying to solve a simple RC circuit. ?1: I can't use the
letter i for current; is there a way around this?
?2: How do I solve this equation for dv/dt, eq: i = v/r + C dv/dt
===
Subject: Re: RC circuit
Interesting - and better.
To insert values for data at the last minute (even in the Plot legend),
try this:
Block[{Plot, ToString, StringJoin},
  Plot[v@t, {t, 0, 50}, Frame -> True,
    FrameLabel -> {"t seconds", "v volts"},
    PlotLabel -> "Voltage in an RC Circuit",
    Epilog -> {Text["c = " <> ToString@c <> " Farads", {30, 4}, {-1, 0}],
      Text["i = " <> ToString@i <> " Amperes", {30, 3.5}, {-1, 0}],
      Text["r = " <> ToString@r <> " Ohms", {30, 3.0}, {-1, 0}]},
    ImageSize -> 450] /. data
];
That eliminates the need for Evaluate, too.
Bobby
On Sat, 7 May 2005 15:35:07 -0400 (EDT), David Park
> Here is one method to solve your exercise. You can use 'i' for current
but
> not 'I', which is a reserved symbol. In general you should avoid symbols
> that start with capital letters. Also notice that 'equal' is represented
by
> == in Mathematica.
> Here is your equation.
> Clear[v];
> deqn = i == v[t]/r + c v'[t];
> Now we solve the equation and add the initial condition that v starts at
0
> when t == 0.
> Clear[v];
> dsolutions = DSolve[{deqn, v[0] == 0}, v, t]
> v[t_] = v[t] /. Part[dsolutions, 1, 1]
> {{v -> Function[{t}, ((-1 + E^(t/(c*r)))*i*r)/E^(t/(c*r))]}}
> ((-1 + E^(t/(c*r)))*i*r)/E^(t/(c*r))
> Now we need a set of data for some particular case to plot.
> (data = {c -> 3, i -> 2, r -> 2.5}) // TableForm
> Now we can plot. We want to substitute the data into the expression for
v[t]
> and we want to evaluate the expression in the Plot statement. I've added
> labels and text to make a more informative plot.
> Plot[Evaluate[v[t] /. data], {t, 0, 50},
> Frame -> True,
> FrameLabel -> {"t seconds", "v volts"},
> PlotLabel -> "Voltage in an RC Circuit",
> Epilog -> {Text["c = 3 Farads", {30, 4}, {-1, 0}],
> Text["i = 2 Amperes", {30, 3.5}, {-1, 0}],
> Text["r = 2.5 Ohms", {30, 3.0}, {-1, 0}]},
> ImageSize -> 450];
> It does take a little time to learn the syntax and available commands in
> Mathematica before one can efficiently use it to solve practical problems
> and homework exercises. But it is worth it if you expect to do much of
it.
> David Park
> djmp@earthlink.net
> http://home.earthlink.net/~djmp/
> Two ?'s. I am trying to solve a simple RC circuit. ?1: I can't use the
> letter i for current; is there a way around this?
> ?2. How do I solve this equation for dv/dt, eq: i = v/r + C dv/dt
--
DrBob@bigfoot.com
===
Subject: Simplifying Log to ArcCos Expressions
I want to integrate the following expression and get a simple answer.
expr1 = (k/r^2)*(1/Sqrt[1 - k^2/r^2])
The answer is actually quite simple: ArcCos[k/r] + constant. But what a lot
of work for me to get it! Perhaps someone can show a simpler path. (I'm
working with Version 5.0.1.)
expr2 = Integrate[expr1, r]
-((Sqrt[k^2 - r^2]*Log[(2*(k + Sqrt[k^2 - r^2]))/
r])/(Sqrt[1 - k^2/r^2]*r))
Then I have to do all the following simplification steps...
expr2[[{2, 3, 4}]]
Numerator[%]/(Denominator[%] /. Sqrt[a_]*(b_) :>
Sqrt[Distribute[a*b^2]])
% /. (a_)^(1/2)/(b_)^2^(-1) -> (a/b)^(1/2)
Simplify[%, r >= k]
Expand[%*FunctionExpand[expr2[[{1, 5}]]]]
expr3 = %[[2]]
expr3
MapAt[Distribute, %, {{2, 1}}]
% /. Sqrt[a_]/(b_) :> Sqrt[Distribute[a/b^2]]
% /. r -> k/z
% /. Log[(z_) + Sqrt[(z_)^2 - 1]] -> I*ArcCos[z]
% /. z -> k/r
David Park
djmp@earthlink.net
http://home.earthlink.net/~djmp/
===
Subject: Re: Simplifying Log to ArcCos Expressions
zeqn = z == k/r
expr1 = (k/r^2)*(1/Sqrt[1 - k^2/r^2])
FullSimplify@
 Assuming[{r > 0},
  Fold[MapAll @@ Reverse@{##} &,
   Integrate[expr1, r] /. Solve[zeqn, k][[1]], {Factor, Refine, Cancel}]]
% /. Solve[zeqn, z][[1]]
The complex part may be grouped into the constant.
> I want to integrate the following expression and get a simple answer.
> expr1 = (k/r^2)*(1/Sqrt[1 - k^2/r^2])
> The answer is actually quite simple: ArcCos[k/r] + constant. But what a
lot of work for me to get it! Perhaps someone can show a simpler path. (I'm
working with Version 5.0.1.)
> expr2 = Integrate[expr1, r]
> -((Sqrt[k^2 - r^2]*Log[(2*(k + Sqrt[k^2 - r^2]))/
> r])/(Sqrt[1 - k^2/r^2]*r))
> Then I have to do all the following simplification steps...
> expr2[[{2, 3, 4}]]
> Numerator[%]/(Denominator[%] /. Sqrt[a_]*(b_) :>
> Sqrt[Distribute[a*b^2]])
> % /. (a_)^(1/2)/(b_)^2^(-1) -> (a/b)^(1/2)
> Simplify[%, r >= k]
> Expand[%*FunctionExpand[expr2[[{1, 5}]]]]
> expr3 = %[[2]]
> expr3
> MapAt[Distribute, %, {{2, 1}}]
> % /. Sqrt[a_]/(b_) :> Sqrt[Distribute[a/b^2]]
> % /. r -> k/z
> % /. Log[(z_) + Sqrt[(z_)^2 - 1]] -> I*ArcCos[z]
> % /. z -> k/r
> David Park
> djmp@earthlink.net
> http://home.earthlink.net/~djmp/
--
Chris Chiasson
http://chrischiasson.com/
1 (810) 265-3161
===
Subject: Re: Simplifying Log to ArcCos Expressions
How about substituting k/r == Cos[t] BEFORE integrating?
expr1 = (k/r^2)*(1/Sqrt[1 - k^2/r^2])
D[k/Cos@t, t]expr1 /. r -> k/Cos@t // Simplify // PowerExpand
Integrate[%, t] + C
% /. t -> ArcCos[k/r]
k/(Sqrt[1 - k^2/r^2]*r^2)
1
C + t
C + ArcCos[k/r]
Sin[t] would work equally well.
Bobby
> I want to integrate the following expression and get a simple answer.
> expr1 = (k/r^2)*(1/Sqrt[1 - k^2/r^2])
> The answer is actually quite simple: ArcCos[k/r] + constant. But what a
lot of work for me to get it! Perhaps someone can show a simpler path. (I'm
working with Version 5.0.1.)
> expr2 = Integrate[expr1, r]
> -((Sqrt[k^2 - r^2]*Log[(2*(k + Sqrt[k^2 - r^2]))/
> r])/(Sqrt[1 - k^2/r^2]*r))
> Then I have to do all the following simplification steps...
> expr2[[{2, 3, 4}]]
> Numerator[%]/(Denominator[%] /. Sqrt[a_]*(b_) :>
> Sqrt[Distribute[a*b^2]])
> % /. (a_)^(1/2)/(b_)^2^(-1) -> (a/b)^(1/2)
> Simplify[%, r >= k]
> Expand[%*FunctionExpand[expr2[[{1, 5}]]]]
> expr3 = %[[2]]
> expr3
> MapAt[Distribute, %, {{2, 1}}]
> % /. Sqrt[a_]/(b_) :> Sqrt[Distribute[a/b^2]]
> % /. r -> k/z
> % /. Log[(z_) + Sqrt[(z_)^2 - 1]] -> I*ArcCos[z]
> % /. z -> k/r
> David Park
> djmp@earthlink.net
> http://home.earthlink.net/~djmp/
--
DrBob@bigfoot.com
===
Subject: Re: Simplifying Log to ArcCos Expressions
>I want to integrate the following expression and get a simple answer.
>expr1 = (k/r^2)*(1/Sqrt[1 - k^2/r^2])
>The answer is actually quite simple: ArcCos[k/r] + constant. But what a
lot
of work for me to get it! Perhaps someone can show a simpler path.
If I'm not mistaken, this is easily done by substitution.
http://www.math.hmc.edu/calculus/tutorials/trig_substitution/
Simplification should be done before calling Integrate, then.
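For completeness, the substitution the tutorial page describes can be written out; with u = k/r the integral collapses in one step:

```latex
u = \frac{k}{r} \;\Longrightarrow\; du = -\frac{k}{r^2}\,dr,
\qquad
\int \frac{k/r^2}{\sqrt{1-k^2/r^2}}\,dr
  \;=\; -\int \frac{du}{\sqrt{1-u^2}}
  \;=\; -\arcsin\frac{k}{r} + C
  \;=\; \arccos\frac{k}{r} + C'
```

The last step uses arccos x = pi/2 - arcsin x, so the -ArcSin[k/r] form and the ArcCos[k/r] form differ only by a constant.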
cheers,
Peltio
Sometimes, the best way to use Mathematica is to not use it at all. : ]
--
Invalid address in reply-to. Crafty demunging required to mail me.
===
Subject: Re: Simplifying Log to ArcCos Expressions
Hi David,
I have just tried the integral you posted,
and Mathematica 4 gives me:
In[1]:= expr1 = (k/r^2)*(1/Sqrt[1 - k^2/r^2]);
In[2]:= expr2 = Integrate[expr1, r]
Out[2]= -ArcSin[k/r]
I don't know why this is a problem on Mathematica 5.
~Scout~
> I want to integrate the following expression and get a simple answer.
> expr1 = (k/r^2)*(1/Sqrt[1 - k^2/r^2])
> The answer is actually quite simple: ArcCos[k/r] + constant. But what a
> lot of work for me to get it! Perhaps someone can show a simpler path.
> (I'm working with Version 5.0.1.)
> expr2 = Integrate[expr1, r]
> -((Sqrt[k^2 - r^2]*Log[(2*(k + Sqrt[k^2 - r^2]))/
> r])/(Sqrt[1 - k^2/r^2]*r))
> Then I have to do all the following simplification steps...
> expr2[[{2, 3, 4}]]
> Numerator[%]/(Denominator[%] /. Sqrt[a_]*(b_) :>
> Sqrt[Distribute[a*b^2]])
> % /. (a_)^(1/2)/(b_)^2^(-1) -> (a/b)^(1/2)
> Simplify[%, r >= k]
> Expand[%*FunctionExpand[expr2[[{1, 5}]]]]
> expr3 = %[[2]]
> expr3
> MapAt[Distribute, %, {{2, 1}}]
> % /. Sqrt[a_]/(b_) :> Sqrt[Distribute[a/b^2]]
> % /. r -> k/z
> % /. Log[(z_) + Sqrt[(z_)^2 - 1]] -> I*ArcCos[z]
> % /. z -> k/r
===
Subject: Calling a MS-DOS command
Hi everyone,
I would like to call some DOS commands from Mathematica like
delete (tmp.txt)
copy filea to fileb
Is this possible in Mathematica? I have been looking at the
documentation but am not having a lot of luck.
===
Subject: Re: Calling a MS-DOS command
Swati Shah ha scritto:
> Hi everyone,
> I would like to call some DOS commands from Mathematica like
> delete (tmp.txt)
> copy filea to fileb
> Is this possible in Mathematica? I have been looking at the
> documentation but am not having a lot of luck.
You can use the Run[] command:
Run[expr1, expr2, ...] generates the printed forms of the expressions
expr_i, separated by spaces, and runs the result as an external
operating-system command.
Note that
!command is the same as
Run["command"]
As an example, under Windows the input
!cmd
starts the MS-DOS console. In this case, after the console is started,
the front end still appears to be evaluating, because it is waiting for
the external command to finish. When you close the console, Mathematica
knows that the external command has ended, and you can do new
computations in the front end.
If you evaluate
!cmd
1+1
you'll see what happens.
bye,
OT
===
Subject: Re: Calling a MS-DOS command
> Hi everyone,
> I would like to call some DOS commands from Mathematica like
> delete (tmp.txt)
> copy filea to fileb
> Is this possible in Mathematica? I have been looking at the
> documentation but am not having a lot of luck.
Run will do this - try Run["pause"] to see more clearly what happens.
BTW, they are not really MS-DOS commands - MS-DOS died long ago; they
are CMD commands.
David Bailey
http://www.dbaileyconsultancy.co.uk
===
Subject: Re: Re: letrec/named let
> Seems more likely that Ordering was broken in 5.1.1.
I suppose it does, at that.
Bobby
>> Interesting. These results seem very dependent on platform.
>> Specifically, when I run the code above I get.
> Seems more likely that Ordering was broken in 5.1.1.
> Andrzej Kozlowski
--
DrBob@bigfoot.com
===
Subject: Re: Re: letrec/named let
> Interesting. These results seem very dependent on platform.
> Specifically, when I run the code above I get.
Seems more likely that Ordering was broken in 5.1.1.
Andrzej Kozlowski
===
Subject: Re: Re: letrec/named let
So the latest version of Ordering has a large performance bug on Windows,
but not Mac.
Could someone test it with Linux?
Bobby
>> Indeed, for real data, your substitute for ordering is much faster
>> (in 5.1.1) than the built-in. If it's 2.5 times SLOWER on your
>> machine, that does seem to imply WRI has radically DECREASED
>> Ordering's performance from 4.1 to 5.1.1.
>> It's hard to believe they'd do that, but apparently they have.
>>> The only possibility I can think of is that ord=Ordering[data]
>>> returns a packed array of integers when data is integer, while
>>> ord=Ordering[data] returns an unpacked array of integers when data
>>> is real.
>> Exactly right. See below. Maybe this explains the performance issue
>> for Ordering itself, too.
>> (Note: 'data' is packed in both tests.)
>> pq = Developer`PackedArrayQ; carlTimed[s_] := Module[{ord, t, o,
>> ans},
>> Print@Timing[ord = Ordering@s; Ordering]; Print@Timing[t =
>> FoldList[ Plus, 1, Sign[Abs[ListCorrelate[{1, -1}, s[[ord]]]]]];
>> FoldList]; Print@Timing[o = Ordering@ord; Ordering];
>> Print@Timing[ans = t[[o]]; Part]; Print[pq /@ {ord, t, o, ans}];
>> ans]
>> ordering[x_List] := Round@Sort[Transpose[{x,
>> N@Range@Length@x}]][[All, 2]] carlNewOrder[s_] := Module[{ord, t,
>> o, ans},
>> Print@Timing[ord = ordering@s; ordering]; Print@Timing[ t =
>> FoldList[Plus, 1, Sign[Abs[ListCorrelate[{1, -1}, s[[ord]]]]]];
>> FoldList]; Print@Timing[o = ordering@ord; ordering];
>> Print@Timing[ans = t[[o]]; Part]; Print[pq /@ {ord, t, o, ans}];
>> ans]
>> data = Table[Random[], {10^6}]; Timing[carlTimed@data; Total]
>> Timing[carlNewOrder@data; Total]
>> {8. Second,Ordering} {0.391 Second,FoldList} {7.25 Second,Ordering}
>> {0.063 Second,Part} {False,True,True,True} {15.891 Second,Total}
>> {1.063 Second,ordering} {0.343 Second,FoldList} {7.36
>> Second,ordering} {0.062 Second,Part} {True,True,True,True} {8.828
>> Second,Total}
>> Notice ordering returned a packed array (ord) for real data, but
>> Ordering didn't. Also notice the second use (with Integer data) has
>> Ordering and ordering equally fast, but that's despite ordering
>> being applied to a packed integer array, but Ordering applied to an
>> unpacked array.
> Interesting. These results seem very dependent on platform. Specifically,
when I run the code above I get.
> In[5]:=
> data = Table[Random[], {10^6}];
> Timing[carlTimed[data]; Total]
> Timing[carlNewOrder[data]; Total]
> {1.478287*Second, Ordering}
> {1.596432*Second, FoldList}
> {1.215809*Second, Ordering}
> {0.262963*Second, Part}
> {True, True, True, True}
> {4.568661*Second, Total}
> {3.336485*Second, ordering}
> {1.603586*Second, FoldList}
> {18.20541*Second, ordering}
> {0.266754*Second, Part}
> {True, True, True, True}
> {23.426852*Second, Total}
> In[8]:=
> $Version
> Out[8]=
> 5.1 for Mac OS X (January 27, 2005)
> As you can see on my machine, both Ordering and ordering return packed
arrays. And when I compare timings of Carl's solution to yours I get results
consistent with what Carl reported, i.e., his solution runs about 5 times
faster.
> --
> To reply via email subtract one hundred and four
--
DrBob@bigfoot.com
===
Subject: Re: letrec/named let
>Indeed, for real data, your substitute for ordering is much faster
>(in 5.1.1) than the built-in. If it's 2.5 times SLOWER on your
>machine, that does seem to imply WRI has radically DECREASED
>Ordering's performance from 4.1 to 5.1.1.
>It's hard to believe they'd do that, but apparently they have.
>>The only possibility I can think of is that ord=Ordering[data]
>>returns a packed array of integers when data is integer, while
>>ord=Ordering[data] returns an unpacked array of integers when data
>>is real.
>Exactly right. See below. Maybe this explains the performance issue
>for Ordering itself, too.
>(Note: 'data' is packed in both tests.)
>pq = Developer`PackedArrayQ; carlTimed[s_] := Module[{ord, t, o,
>ans},
>Print@Timing[ord = Ordering@s; Ordering]; Print@Timing[t =
>FoldList[ Plus, 1, Sign[Abs[ListCorrelate[{1, -1}, s[[ord]]]]]];
>FoldList]; Print@Timing[o = Ordering@ord; Ordering];
>Print@Timing[ans = t[[o]]; Part]; Print[pq /@ {ord, t, o, ans}];
>ans]
>ordering[x_List] := Round@Sort[Transpose[{x,
>N@Range@Length@x}]][[All, 2]] carlNewOrder[s_] := Module[{ord, t,
>o, ans},
>Print@Timing[ord = ordering@s; ordering]; Print@Timing[ t =
>FoldList[Plus, 1, Sign[Abs[ListCorrelate[{1, -1}, s[[ord]]]]]];
>FoldList]; Print@Timing[o = ordering@ord; ordering];
>Print@Timing[ans = t[[o]]; Part]; Print[pq /@ {ord, t, o, ans}];
>ans]
>data = Table[Random[], {10^6}]; Timing[carlTimed@data; Total]
>Timing[carlNewOrder@data; Total]
>{8. Second,Ordering} {0.391 Second,FoldList} {7.25 Second,Ordering}
>{0.063 Second,Part} {False,True,True,True} {15.891 Second,Total}
>{1.063 Second,ordering} {0.343 Second,FoldList} {7.36
>Second,ordering} {0.062 Second,Part} {True,True,True,True} {8.828
>Second,Total}
>Notice ordering returned a packed array (ord) for real data, but
>Ordering didn't. Also notice the second use (with Integer data) has
>Ordering and ordering equally fast, but that's despite ordering
>being applied to a packed integer array, but Ordering applied to an
>unpacked array.
Interesting. These results seem very dependent on platform. Specifically,
when I run the code above I get.
In[5]:=
data = Table[Random[], {10^6}];
Timing[carlTimed[data]; Total]
Timing[carlNewOrder[data]; Total]
{1.478287*Second, Ordering}
{1.596432*Second, FoldList}
{1.215809*Second, Ordering}
{0.262963*Second, Part}
{True, True, True, True}
{4.568661*Second, Total}
{3.336485*Second, ordering}
{1.603586*Second, FoldList}
{18.20541*Second, ordering}
{0.266754*Second, Part}
{True, True, True, True}
{23.426852*Second, Total}
In[8]:=
$Version
Out[8]=
5.1 for Mac OS X (January 27, 2005)
As you can see on my machine, both Ordering and ordering return packed
arrays. And when I compare timings of Carl's solution to yours I get results
consistent with what Carl reported, i.e., his solution runs about 5 times
faster.
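For readers following along outside Mathematica: the index permutation that Ordering (and Carl's ordering) computes can be sketched in Python. The 1-based indexing is my choice, to match Mathematica's convention; this only illustrates the operation, not the packed-array performance issue itself.

```python
def ordering(data):
    """Return the 1-based index permutation that sorts `data`
    ascending, i.e. what Mathematica's Ordering[data] computes."""
    return [i + 1 for i in sorted(range(len(data)), key=data.__getitem__)]

data = [0.3, 0.1, 0.2]
perm = ordering(data)
# Applying the permutation reproduces the sorted list:
assert [data[i - 1] for i in perm] == sorted(data)
```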
--
To reply via email subtract one hundred and four
===
Subject: Re: Re: letrec/named let
>> Seems more likely that Ordering was broken in 5.1.1.
>I suppose it does, at that.
--
To reply via email subtract one hundred and four
===
Subject: Hexagonal Spiral
Hi
the purpose of this message is to draw a hexagonal spiral, divide every
edge into a suitable number of points, and then map the prime numbers
onto all the points of the spiral.
this is motivated by the figure:
http://www.cut-the-knot.org/ctk/HexMosaic.gif
this project consists of three parts.
first: draw a hexagonal spiral using the hexagon function from the
MathWorld HexagonalGrid.nb notebook, making multiple hexagons, each one
bigger than the previous by one unit; the fifth edge of every hexagon
is extended by one unit so that the next, bigger hexagon can begin in a
spiral way.
second: divide the edges of every hexagon into pieces according to the
hexagon's distance from the center, except the fifth edge; this
division uses the straight-line equation to determine the coordinates
of every point inside an edge.
third: map the prime numbers onto the points which constitute the
hexagonal spiral.
you can download the notebook from:
http://sr2.mytempdir.com/25353
the program may seem messy and convoluted, but it may be useful for
studying the straight-line equation, or just for fun.
any criticism, suggestions, ideas, or improvements are welcome.
(* The hexagon generator function is from mathworld in the notebook
HexagonalGrid.nb with a small variation*)
x = 0; dsp = 1;(*dsp is for drawing consecutive bigger hexagons*)
Table[p[i] = x + # & /@ (dsp*(Through[{Cos, Sin}[Pi#/3]] &) /@ Range[0,
5]);
dsp++;
(* The fifth edge of every hexagon is extended horizontally by 1
unit so we can begin the next, bigger hexagon in a spiral form *)
p[i] = ReplacePart[p[i], Last[p[i]] + {1, 0}, Length[p[i]]];, {i,(*
number of hexagons possible *)100}]; (* End of Table function *)
i = 1; ww = {}; While[i <= 20 (* number of hexagons desired *),
ww = Join[ww, p[i]]; i++] (*
The coordinates of the vertices of consecutive Hexagons will be in ww
variable*)
(* the Hexagonal spiral *)
m1 = Graphics[{Line[ww], {Red, PointSize[0.02], Point[{0, 0}]}},
AspectRatio -> Automatic];
Show[m1]
(* The straight-line equation function *)
f[{x1_, y1_}, {x2_, y2_}] :=
y = x*((y2 - y1)/(x2 - x1)) + y1 - x1*((y2 - y1)/(x2 - x1))
w = Partition[ww, 6];
w2 = {}; i = 1; While[i <= Length[w] - 1,
w2 = Join[w2, {Append[w[[i]], First[w[[i + 1]]]]}]; i++;]
(* the following will divide every edge into equal units according to its
distance from the center of the hexagon, with the exception of the horizontal
fifth edge, which is longer than the others by one unit *)
n = 1; polylist = {}; polynum = 19 (* number of hexagons *); lst = {}; s =
1;
i = 1; j = 1;
While[
s <= polynum,
j = 1;
While[j <= 6, lst = {};
x1 = w2[[s]][[j]][[1]];
y1 = w2[[s]][[j]][[2]];
x2 = w2[[s]][[j + 1]][[1]];
y2 = w2[[s]][[j + 1]][[2]];
If[j == 5, ss = s + 1, ss = s];
i = 1; n = 1;
While[i <= s,
x = x1 + n*(x2 - x1)/ss;
(*call the straight line equation function : *)
y = f[{x1, y1}, {x2, y2}];
lst = Join[lst, {{x, y}}];
n++; i++];
If[j == 1, lst = Prepend[lst, {x1, y1}]];
If[j == 5, lst = Append[lst, {x2, y2}]];
polylist = Join[polylist, lst];
j++];
s++]
(* Delete one of each pair of consecutive duplicate points, which mark
the beginning point of each hexagon *)
i = 1; While[i <= Length[polylist] - 2,
If[polylist[[i]] == polylist[[i + 1]], polylist = Delete[polylist, i]];
i++]
v = {}; i = 1;
While[i <= Length[polylist],
If[PrimeQ[i], v = Join[v, {polylist[[i]]}]]; i++]
Show[Graphics[{Line[Take[ww, 50]], {PointSize[.02], Point[{0, 0}], Red,
Point /@ Take[polylist, 234]}}], AspectRatio -> Automatic]
hexx = Graphics[{PointSize[.02], Point[{0, 0}], Green, Point /@ polylist}];
prm = Graphics[{PointSize[.02], Red, Point /@ v}];
lin = Graphics[{Line[ww], {PointSize[.03], Point[{0, 0}]}}];
(* show prime numbers in Red : *)
Show[hexx, prm, AspectRatio -> Automatic]
Show[lin, hexx, prm, AspectRatio -> Automatic]
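The geometric core of the construction, vertices at angles that are multiples of pi/3 plus a primality test on the point index, can be sketched in Python for readers without the notebook (the function names here are mine, not from the notebook):

```python
import math

def hexagon(radius, center=(0.0, 0.0)):
    """Vertices of a regular hexagon of the given radius, at angles
    k*pi/3 for k = 0..5 -- the same formula as the
    Through[{Cos, Sin}[Pi #/3]] construction above."""
    cx, cy = center
    return [(cx + radius * math.cos(k * math.pi / 3),
             cy + radius * math.sin(k * math.pi / 3)) for k in range(6)]

def is_prime(n):
    """Trial-division primality test for marking spiral points."""
    if n < 2:
        return False
    return all(n % d for d in range(2, math.isqrt(n) + 1))

verts = hexagon(1.0)
assert abs(verts[0][0] - 1.0) < 1e-12 and abs(verts[0][1]) < 1e-12
assert [n for n in range(1, 20) if is_prime(n)] == [2, 3, 5, 7, 11, 13, 17, 19]
```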
===
Subject: Re: Ordering broken on Windows, but not Mac
This would be the first ever example of this sort of thing in my
experience. There used to be a visible difference between the behaviour
of Split on the Mac and other platforms (Carl and I were involved in
discovering this phenomenon that was never really explained) but we
attributed this to differences at operating system level, probably
memory management or maybe even hardware. But in this case the issue is
the unpacking of packed arrays, something that is clearly handled in
the Mathematica code itself. I have always believed that the
basic Kernel code is written in C++ and then compiled for various
platforms on which Mathematica runs which would mean that such a thing
could not happen, but perhaps this is no longer the case (if it ever
was the case).
Andrzej
> Ah. Here's what I get for the version at my machine:
> $Version
> 5.1 for Microsoft Windows (January 27, 2005)
> Same date as you, Bill.
> There's no reason Ordering can't be broken on Windows but not Mac, is
> there? It definitely behaves differently with respect to packing its
> results.
> Bobby
> On Sat, 7 May 2005 21:39:53 -0700, Bill Rowe
>>>> Seems more likely that Ordering was broken in 5.1.1.
>>> I suppose it does, at that.
>> version 5.1.1. Or at least what was purported to be version 5.1.1. I
>> did note at the time I downloaded this version and installed it, the
>> string returned by $Version shows the version to be 5.1 with a more
>> recent date than before.
>> --
>> To reply via email subtract one hundred and four
> --
> DrBob@bigfoot.com
===
Subject: Folding Deltas
Can anyone help to verify in Mathematica the expression given by Rota
(http://xoomer.virgilio.it/maurocer/Text07.htm):
Convolution (sum of Dirac delta functions ** sum of Dirac delta
functions) == sum of Dirac delta functions at the summed locations.
I tried
Integrate[DiracDelta[t] DiracDelta[t - 2] , {t, -3, 3} ]
which does not evaluate;
but
Integrate[DiracDelta[t] DiracDelta[t - 2] , {t, -3, 1} ] +
Integrate[DiracDelta[t] DiracDelta[t - 2] , {t, 1, 3} ] == 0
True
( I use ver 5.1 with W2k)
===
Subject: Re: Calling a MS-DOS command
In the help system under 'System interface':'File system' you will find:
DeleteFile
CopyFile.
You can also find in 'System interface':'External Commands' the function
Run
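As an aside, those two built-ins map directly onto ordinary OS-level file operations; here is a quick Python sketch of the same two actions, using a temporary directory so it is safe to run:

```python
import os
import shutil
import tempfile

# The OS-level analogues of Mathematica's CopyFile and DeleteFile.
d = tempfile.mkdtemp()
src = os.path.join(d, "filea")
dst = os.path.join(d, "fileb")
with open(src, "w") as f:
    f.write("hello")

shutil.copy(src, dst)   # like CopyFile["filea", "fileb"]
os.remove(src)          # like DeleteFile["tmp.txt"]

assert os.path.exists(dst) and not os.path.exists(src)
```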
===
Subject: ArcTan[1/0] no result, but ArcTan[Infinity] ok. How to resolve?
hi;
Mathematica 5.1, on windows.
ArcTan[1/0] gives an error but
ArcTan[Infinity] gives the correct answer.
One way to make ArcTan[1/0] give Pi/2 is to
write it as ArcTan[0,1].
I do know that 1/0 is DirectedInfinity[] with
unknown direction while Infinity is
DirectedInfinity[1], and that is probably the
reason that ArcTan[1/0] gives an error
but ArcTan[Infinity] does not.
What I am asking is how to make 1/0 result in DirectedInfinity[1]
to avoid the error. Is this possible?
What function do I need to wrap around 1/0 to
turn it into DirectedInfinity[1] instead of
DirectedInfinity[]? Or maybe I need to figure out how
to detect when a division results in DirectedInfinity[]
and convert that to DirectedInfinity[1]? Do I need
to redefine 1/0 somehow? Maybe make a new
rule saying that when Mathematica sees a 1/0 expression it
becomes DirectedInfinity[1]? But maybe this will break
other things?
Or maybe I should not mess with this stuff at all and
just change the code to ArcTan[x,y] instead of
ArcTan[y/x] and be happy?
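(As an aside, the same distinction exists outside Mathematica: a two-argument arctangent never forms the quotient y/x, so the direction is never lost. A quick Python check of the behaviour ArcTan[0,1] mirrors:)

```python
import math

# atan2(y, x) keeps the sign/direction information that the
# quotient y/x destroys:
assert abs(math.atan2(1.0, 0.0) - math.pi / 2) < 1e-15   # like ArcTan[0, 1] -> Pi/2
assert abs(math.atan2(-1.0, 0.0) + math.pi / 2) < 1e-15  # direction preserved
assert abs(math.atan2(0.0, -1.0) - math.pi) < 1e-15      # quotient 0/(-1) would lose this
```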
Steve
===
Subject: Re: How to quickly find number of non-zero elements in sparse
matrix rows?
>I have a sparse matrix, roughly 200k by 200k, with about .01% of the
>entries non zero, represented with SparseArray. I'd like to reasonably
>efficiently generate a 200k-long list where each element is the number of
>non-zero entries in the corresponding row of the matrix. I haven't been
>able to figure out a quick way to do this.
>One approach I've tried is the following (using a made up SparseArray of
>an identity matrix to illustrate the point):
>In[76]:= sa = SparseArray[Table[{i,i}->1, {i, 200000}]];
>In[77]:= rowLen[sa_, r_] := Length[ArrayRules[Take[sa, {r}]]]-1
>However, it's quite slow--about 1/10 of a second for each value computed
>(on a 1GHz Mac G4)
>In[80]:= Table[rowLen[sa,i], {i,100}] // Timing
>Out[80]=
>{12.4165
Second,{1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
>
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
> 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1}}
>I've got to assume that there's an efficient way to do this--does anyone
>have any suggestions?
>-matt
Hi
You may have already tried this approach; it is quite naive, but it
seems to work for a 2000 x 2000 matrix on my
Mathematica 5.1.1 (Celeron, XP).
Of course it gives up at 20000.
Clear[sa, sas, sass]
Off[General::spell1]
sa = SparseArray[Table[{i, i} -> 1, {i, 2000}]] // Normal // Flatten;
sas = DeleteCases[sa, 0];
l = Length[sas]
--
Pratik Desai
Graduate Student
UMBC
Department of Mechanical Engineering
Phone: 410 455 8134
===
Subject: Re: How to quickly find number of non-zero elements in sparse
matrix rows?
Matt,
Try this:
<0];
populationlist=(Range@Length@sa)/.frules
> I have a sparse matrix, roughly 200k by 200k, with about .01% of the
> entries non zero, represented with SparseArray. I'd like to reasonably
> efficiently generate a 200k-long list where each element is the number of
> non-zero entries in the corresponding row of the matrix. I haven't been
> able to figure out a quick way to do this.
> One approach I've tried is the following (using a made up SparseArray of
> an identity matrix to illustrate the point):
> In[76]:= sa = SparseArray[Table[{i,i}->1, {i, 200000}]];
> In[77]:= rowLen[sa_, r_] := Length[ArrayRules[Take[sa, {r}]]]-1
> However, it's quite slow--about 1/10 of a second for each value computed
> (on a 1GHz Mac G4)
> In[80]:= Table[rowLen[sa,i], {i,100}] // Timing
> Out[80]=
> {12.4165
Second,{1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
>
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
> 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1}}
> I've got to assume that there's an efficient way to do this--does anyone
> have any suggestions?
> -matt
> --
> Matt Pharr matt@pharr.org
> In a cruel and evil world, being cynical can allow you to get some
> entertainment out of it. --Daniel Waters
--
Chris Chiasson
http://chrischiasson.com/
1 (810) 265-3161
===
Subject: Re: How to quickly find number of non-zero elements in sparse
matrix rows?
> The 4-th part of the data structure records the cumulative sum of
> non-zero entries. However, there seems to be no simple way to access
> this information. Usually, for special formats you can use Part to
> extract such information but Part is interpreted by SparseArray,
> circumventing this.
Of course one can use pattern-matching! For a sparse array,
sa=SparseArray[{i_,i_} -> 1,{10,10}]
extract the cumulative sum of non-zero entries,
sa /. SparseArray[_, _, _, x_] :> x[[2,1]]
and use ListConvolve to give the number of non-zero entries in each row.
ListConvolve[{1, -1}, %]
I think that this is about as fast as you can hope for.
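The cumulative-sum-then-difference idea is language independent; here is a small Python sketch of the same step on a plain coordinate list (the SparseArray internals themselves are exactly as shown above; the helper name is mine):

```python
from itertools import accumulate

def row_counts(row_indices, nrows):
    """Per-row counts of stored entries, computed as adjacent
    differences of the cumulative count -- mirroring the
    ListConvolve[{1, -1}, ...] step on SparseArray's internal
    cumulative-sum vector."""
    cum = [0] * (nrows + 1)
    for r in row_indices:
        cum[r + 1] += 1
    cum = list(accumulate(cum))          # cumulative number of entries
    return [cum[i + 1] - cum[i] for i in range(nrows)]

# 4x4 matrix with stored entries in rows 0, 1, 2, 2, 3:
assert row_counts([0, 1, 2, 2, 3], 4) == [1, 1, 2, 1]
```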
Paul
--
Paul Abbott Phone: +61 8 6488 2734
School of Physics, M013 Fax: +61 8 6488 1014
The University of Western Australia (CRICOS Provider No 00126G)
AUSTRALIA http://physics.uwa.edu.au/~paul
http://InternationalMathematicaSymposium.org/IMS2005/
===
Subject: Re: FilledPlot: Curves->Back option and Epilog not working?
That's one of the problems with DisplayTogether or Show. It only picks up
an
option (like Epilog) once from its first appearance. As for the Curves
option, it only matters if you are plotting more than one fill in a
FilledPlot statement. (You didn't actually give us working code so I don't
know exactly what you were doing. Were you plotting more than one curve and
fill with a single call of WavePlot?)
With DrawGraphics you have none of these problems.
Needs[DrawGraphics`DrawingMaster`]
waveDraw[ampl_, phase_, fillcolor_, mesg_, mesgloc_] :=
{FilledDraw[ampl*Sin[t + phase], {t, 0, 4*Pi}, Fills -> fillcolor],
Text[StyleForm[mesg, FontColor -> ColorMix[fillcolor, Black][0.3]],
mesgloc, {-1, 0}]}
Draw2D[
{waveDraw[1, 20 Degree, LightSteelBlue, "wave1", {0.2, 1.1}],
waveDraw[0.6, 100 Degree, Orchid, "wave2", {5, 0.7}]},
Frame -> True,
Axes -> {True, False}, AxesFront -> True,
PlotRange -> {-1.1, 1.3},
Background -> Linen,
ImageSize -> 450]
The only thing to worry about is that a subsequent filled wave will cover
up
a previous text label. It might be preferable to put the text labels in the
Draw2D statement.
David Park
djmp@earthlink.net
http://home.earthlink.net/~djmp/
I have a module that makes a filled plot. I use it to make two (or
more) displaced filled plots, then Show these plots (code below).
Everything works as expected, with two problems:
* The Curves->Back option never works.
* An additional line generated by an Epilog in the module only appears
in the final test plot.
I'm beginning to realize that when multiple plots each having an Epilog
are combined using Show or DisplayTogether, only the *final* Epilog gets
executed. Throwing in Evaluates at various stages in the process
doesn't seem to get around this.
If this is true, maybe the description of Epilog in the online Help
should say this? (Especially since it seems an intuitively *non*obvious
way for Epilog to behave -- shouldn't the stuff created by Epilog in a
Plot command become part of the plot once the plot command has been
executed?)
I don't have a clue why Curves->Back doesn't work anywhere or any way
I've tried it (including changing Front to Back in the online Help
example).
wavePlot[] := Module[{},
,
FilledPlot[ ,
Curves -> Back,
Epilog -> {Line[ ]},
DisplayFunction -> Identity]];
;
testPlot1 = wavePlot[ ];
;
testPlot2 = wavePlot[ ];
Show[testPlot1, testPlot2,
DisplayFunction -> $DisplayFunction,
PlotRange -> All];
===
Subject: Re: FilledPlot: Curves->Back option and Epilog not working?
If you collect (append them to some list, maybe called
epilogcollection) all of your epilog statements as you go along, you
can probably use Show[plot1,plot2,Graphics[epilogcollection]] to output
all of the epilogs on the last chart.
> I have a module that makes a filled plot. I use it to make two (or
> more) displaced filled plots, then Show these plots (code below).
> Everything works as expected, with two problems:
> * The Curves->Back option never works.
> * An additional line generated by an Epilog in the module only appears
> in the final test plot.
> I'm beginning to realize that when multiple plots each having an Epilog
> are combined using Show or DisplayTogether, only the *final* Epilog gets
> executed. Throwing in Evaluates at various stages in the process
> doesn't seem to get around this.
> If this is true, maybe the description of Epilog in the online Help
> should say this? (Especially since it seems an intuitively *non*obvious
> way for Epilog to behave -- shouldn't the stuff created by Epilog in a
> Plot command become part of the plot once the plot command has been
> executed?)
> I don't have a clue why Curves->Back doesn't work anywhere or any way
> I've tried it (including changing Front to Back in the online Help
> example).
> wavePlot[] := Module[{},
> ,
> FilledPlot[ ,
> Curves -> Back,
> Epilog -> {Line[ ]},
> DisplayFunction -> Identity]];
> ;
> testPlot1 = wavePlot[ ];
> ;
> testPlot2 = wavePlot[ ];
> Show[testPlot1, testPlot2,
> DisplayFunction -> $DisplayFunction,
> PlotRange -> All];
--
Chris Chiasson
http://chrischiasson.com/
1 (810) 265-3161
===
Subject: Re: FilledPlot: Curves->Back option and Epilog not working?
$Version
5.1 for Mac OS X (January 27, 2005)
Needs["Graphics`"];
Options[FilledPlot,Curves]
{Curves -> Back}
Curves->Back is the default. Compare
FilledPlot[{Sin[x],Cos[x],x^2/18},{x,0,2 Pi}];
FilledPlot[{Sin[x],Cos[x],x^2/18},{x,0,2 Pi},
Curves->Front];
Use Epilog in the DisplayTogether or with the Show
DisplayTogether[
plt1=FilledPlot[x^2/18,{x,0,2 Pi}],
plt2=FilledPlot[{Sin[x],Cos[x]},{x,0,2 Pi}],
Epilog->Text["This is the epilog",{2,1.5}]];
Show[plt1,plt2,
DisplayFunction->$DisplayFunction,
Epilog->Text["This is the epilog",{2,1.5}]];
Or carry the Epilog forward using AbsoluteOptions
DisplayTogether[
plt1=FilledPlot[x^2/18,{x,0,2 Pi},
Epilog->Text["This is the epilog 1",{2,1.5}]],
plt2=FilledPlot[{Sin[x],Cos[x]},{x,0,2 Pi},
Epilog->Text["This is the epilog 2",{2,2}]],
Epilog->{Epilog/.AbsoluteOptions[plt1],
Epilog/.AbsoluteOptions[plt2]}];
Show[plt1,plt2,
DisplayFunction->$DisplayFunction,
Epilog->{Epilog/.AbsoluteOptions[plt1],
Epilog/.AbsoluteOptions[plt2]}];
Bob Hanlon
===
> Subject: FilledPlot: Curves->Back option and Epilog not
working?
> I have a module that makes a filled plot. I use it to make two (or
> more) displaced filled plots, then Show these plots (code below).
> Everything works as expected, with two problems:
> * The Curves->Back option never works.
> * An additional line generated by an Epilog in the module only appears
> in the final test plot.
> I'm beginning to realize that when multiple plots each having an Epilog
> are combined using Show or DisplayTogether, only the *final* Epilog gets
> executed. Throwing in Evaluates at various stages in the process
> doesn't seem to get around this.
> If this is true, maybe the description of Epilog in the online Help
> should say this? (Especially since it seems an intuitively *non*obvious
> way for Epilog to behave -- shouldn't the stuff created by Epilog in a
> Plot command become part of the plot once the plot command has been
> executed?)
> I don't have a clue why Curves->Back doesn't work anywhere or any way
> I've tried it (including changing Front to Back in the online Help
> example).
> wavePlot[] := Module[{},
> ,
> FilledPlot[ ,
> Curves -> Back,
> Epilog -> {Line[ ]},
> DisplayFunction -> Identity]];
> ;
> testPlot1 = wavePlot[ ];
> ;
> testPlot2 = wavePlot[ ];
> Show[testPlot1, testPlot2,
> DisplayFunction -> $DisplayFunction,
> PlotRange -> All];
===
Subject: Adding two numbers of high precision results in a number of low
precision??
I have the following question. In a pretty long code (so I won't send
everything) at some point the following loop occurs:
> For[i=1,i<=3,i++,
> bTemp[i,0,j]=bTemp[i,0,0]+Q[i,j];
> Print["Precisie bTemp[",i,",0,0] = ",Precision[bTemp[i,0,0]]];
> Print["Precisie Q[",i,",",j,"] = ",Precision[Q[i,j]]];
> Print["Precisie bTemp[",i,",0,",j,"] = ",Precision[bTemp[i,0,j]]];
> ]; (* einde For *)
(Precisie is Dutch for precision ;)), where the j is a loop counter
and the Q[.,.]'s and bTemp[.,0,0]'s are known numbers of a certain
precision. Now two pieces of the output of this piece of code look like
this:
> Precisie bTemp[2,0,0] = 397.142
> Precisie Q[2,1] = 397.172
> Precisie bTemp[2,0,1] = 395.193
and
> Precisie bTemp[3,0,0] = 389.685
> Precisie Q[3,1] = 390.729
> Precisie bTemp[3,0,1] = 53.8232
Now the first one makes sense, but the last one, how is it possible that
if I add two numbers of precision ca. 390 I get something of precision
53 back? I hope somebody could explain, because there are numerical
problems in my code that mess things up and I'm afraid the stuff above
could have to do with it...
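What you describe is the signature of cancellation: Mathematica's significance arithmetic tracks how many digits survive when two nearly opposite numbers are added, and if the leading ~337 digits of the two ~390-digit terms cancel, only ~53 meaningful digits remain. The digit counting can be mimicked with Python's decimal module (the particular numbers below are invented for illustration; decimal does exact arithmetic at a fixed precision, whereas Mathematica propagates the error estimate for you):

```python
from decimal import Decimal, getcontext

getcontext().prec = 390            # work with ~390 significant digits
a = Decimal(2).sqrt()              # a value known to ~390 digits
b = Decimal(10) ** -337 - a        # agrees with -a in its first ~337 digits
s = a + b                          # the leading ~337 digits cancel...
assert s == Decimal(10) ** -337    # ...so only about 390 - 337 = 53 of the
                                   # sum's digits are meaningful, much as a
                                   # Precision ~390 pair gave a Precision
                                   # ~54 sum above
```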
--
Kees van Schaik
Frankfurt MathFinance Institute
J.W. Goethe-Universitaet
Frankfurt am Main
Tel: +49 (0)69 79823453
WWW: http://ismi.math.uni-frankfurt.de/vanSchaik/
===
Subject: Partitioning a list from an index
hello,
how can I partition a list (it doesn't have to be with 'Partition') so
that the partitioning begins at a specific index?
like this:
list1={1,2,3,4,5,6,7,8,9,10}
and I might want to split it into pairs or threes starting from index 6,
for example; so for threes it would be
{1,2,{3,4,5},{6,7,8},9,10}
with or without padding for the leftover elements, depending on what is
possible, or maybe put them together or something.
for pairs it would be {1,{2,3},{4,5},{6,7},{8,9},10}
again with or without padding
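One way to state the requested behaviour precisely, sketched in Python; the alignment rule and the leftovers-stay-ungrouped choice are my reading of the two examples:

```python
def partition_from(lst, n, start):
    """Group `lst` into runs of `n`, aligned so that a group begins at
    1-based index `start`; leftover elements at both ends stay
    ungrouped (no padding)."""
    i0 = (start - 1) % n              # length of the leading leftover
    out = list(lst[:i0])
    i = i0
    while i + n <= len(lst):          # full groups only
        out.append(list(lst[i:i + n]))
        i += n
    out.extend(lst[i:])               # trailing leftover
    return out

lst = list(range(1, 11))
assert partition_from(lst, 3, 6) == [1, 2, [3, 4, 5], [6, 7, 8], 9, 10]
assert partition_from(lst, 2, 6) == [1, [2, 3], [4, 5], [6, 7], [8, 9], 10]
```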
thank you very much
Guy
===
Subject: Re: Re: Mathematica Notebook Organization
Tony,
With your experience and knowledge, I think what you write has to be taken
seriously.
But my question is: What other document can meld text, interactive
calculations, graphics and animations? That's why I think CAS notebooks are
the ideal medium for technical communication. I would much prefer to do all
these things in one program. I don't see how it is easier to have to master
three or four different applications.
Furthermore, Mathematica notebooks can be kept relatively short because the
output cells can be omitted. Readers, if they have Mathematica, can
regenerate the cells. If one uses pdf documents, say, then there is no
calculational interaction. All the graphics and output cells must be
included and these take a lot of space. I've been told that the physics
archive objects if the files are too long. (Length of files is important to
me because I don't have broadband yet.)
Communicating with people through Mathematica notebooks is pure heaven for
me. The problem is that this is not the universal standard. Many people
don't have Mathematica or dislike it. I don't see yet that Mathematica web
pages are a good method, and the ones that I have seen so far are not
interactive. (Perhaps I don't know how to use them yet. Can you copy them
and paste directly into a private Mathematica notebook?) In the technical
world there are probably going to be huge battles in trying to establish a
standard. Maybe it will never get sorted out to the complete convenience of
users.
In the meantime, I hope WRI will keep working on the notebook interface and
make it better and more intuitive as a method of creating interactive
technical documents.
David Park
djmp@earthlink.net
http://home.earthlink.net/~djmp/
I don't claim to be a guru on any of this, but I do claim to have a
very large amount of ordinary user experience (multiple decades of
experience) with (a) markup systems for presentation of technical
material (books, reports, class notes, seminar slides), and (b) software
for extensive numerical and symbolic computation and preparation of
graphics.
Based on this long experience, when I read about
. . . the same markup language that describes text layout and
formatting [being] the language [that is] used to [do calculations,
create graphics, and] execute (inter)active content . . .
count me as skeptical -- VERY skeptical. This is a very BAD idea, that
will inevitably cause much more damage than the dubious and limited
benefits it may produce.
Basically, I'd argue that attempting to combine both of these quite
different functions into a single language or package and a single user
interface is an absolute guarantee that the language and the system and
the interface will all become so complex, so convoluted, so hard to
learn and use and remember between uses, that ordinary users (meaning,
e.g., ordinary working engineering and science professionals) will
abandon such a system for simpler individual tools with easily
interchangeable file formats which will enable them to perform these two
separate functions separately, much more easily, with much less of a
learning curve, and with enormously less aggravation.
I think this concept of having such a single, universal language keeps
emerging (mostly among computer types?) because it poses real and
difficult and very interesting intellectual challenges to computer types
just to accomplish this -- and that's fine; intellectual challenges are
what creative people live for, and no one can blame or criticize
computer types for being challenged by these goals.
The problem is, the *only* advantage of such a unified tool *for the
user*, so far as I can see, is that you only have to double-click on one
icon to start it up; and the difficulties it then produces for ordinary
users are immense and unending. I won't attempt to list at this point
all the different ways these difficulties arise (inherently, and
unavoidably) in such a system; but if this debate continues I may get
motivated to respond with such a list.
I love Mathematica, I love TeX, I'm very fond of Acrobat and Illustrator
and BBEdit and . . . but the more people try to stuff the capabilities
of all of these into Mathematica, the surer I am that this is a terrible
idea.
--Tony Siegman, Stanford University
===
Subject: Re: Calling a MS-DOS command
Look up Run in the HelpBrowser.
> Hi everyone,
> I would like to call some DOS commands from Mathematica like
> delete (tmp.txt)
> copy filea to fileb
> Is this possible in Mathematica? I have been looking at the
> documentation but am not having a lot of luck.
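For instance (a sketch, not part of the original reply), Run hands its argument to the operating-system shell and returns the command's exit code:

```mathematica
(* run DOS commands from Mathematica; Run returns the exit code, 0 on success *)
Run["del tmp.txt"]        (* DOS uses 'del', not 'delete' *)
Run["copy filea fileb"]
```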
--
Murray Eisenberg murray@math.umass.edu
Mathematics & Statistics Dept.
Lederle Graduate Research Tower phone 413 549-1020 (H)
University of Massachusetts 413 545-2859 (W)
710 North Pleasant Street fax 413 545-1801
Amherst, MA 01003-9305
===
Subject: Re: Calling a MS-DOS command
Hi Swati,
You can call them:
CopyFile["filea", "fileb"] copies filea to fileb
DeleteFile["tmp.txt"] deletes tmp.txt
And you can find more on these in the Help Browser under 'System Interface'.
Namrata
> Hi everyone,
> I would like to call some DOS commands from Mathematica like
> delete (tmp.txt)
> copy filea to fileb
> Is this possible in Mathematica? I have been looking at the
> documentation but am not having a lot of luck.
===
Subject: Re: Calling a MS-DOS command
Hi Luc,
Is it also possible to call a non-standard DOS command from Mathematica? I
have a piece of software that I would like to run from Mathematica, and it
is normally launched with a DOS command. This command is what I would like
to call from Mathematica.
Namrata
> In the help system under 'System interface':'File system' you will find:
> DeleteFile
> CopyFile.
> You can also find in 'System interface':'External Commands' the function
> Run
> -----Original Message-----
===
> Subject: Calling a MS-DOS command
> Hi everyone,
> I would like to call some DOS commands from Mathematica like
> delete (tmp.txt)
> copy filea to fileb
> Is this possible in Mathematica? I have been looking at the
> documentation but am not having a lot of luck.
===
Subject: Re: Calling a MS-DOS command
Hi Swati,
> I would like to call some DOS commands from Mathematica like
> delete (tmp.txt)
> copy filea to fileb
> Is this possible in Mathematica? I have been looking at the
> documentation but am not having a lot of luck.
Really?
Your vision notwithstanding, on my XP system Run["dir"] does what is
intended: it opens a console (a DOS command window), runs the command dir,
then closes the console. I know that the command was successful since the
return code was 0. In the unlikely event that you have the MKS utilities,
Run["ls"] should do something similar on your system.
Another command to try would be Run["c:\\Windows\\System32\\cmd.exe"],
which opens a console on XP systems.
Dave.
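To answer Namrata's earlier question about launching a non-standard program, the same mechanism applies; here is a hedged sketch (the program name and path are made up for illustration):

```mathematica
(* launch an arbitrary external program via Run;
   quote the path if it contains spaces, and check the exit code *)
rc = Run["\"C:\\Program Files\\MyTool\\mytool.exe\" input.dat"];
If[rc != 0, Print["mytool failed with exit code ", rc]]
```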