====
Subject: Re: Creating understandable function graphs?
> Then the following n'th degree (Bernstein) polynomial
> will pass through each of the n+1 points (x_k,y_k)
> (k = 0, 1, 2, ..., n):
> SUM(k=1 to n) of (y_k)*C(n,k)*(x^k)*(1-x)^(n-k),
> where C(n,k) is the binomial coefficient associated
> with n and k (i.e. C(n,k) is the number of k element
> subsets that can be chosen from a set of n elements).
Oops, the polynomial expansion should have been
SUM(k=0 to n) of (y_k)*C(n,k)*(x^k)*(1-x)^(n-k).
Also, the expansion is on the MathWorld web pages:
http://mathworld.wolfram.com/BernsteinExpansion.html
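For anyone who wants to check the corrected expansion numerically, here is a short sketch (the helper name `bernstein` and the sample values are illustrative, not from the original post):

```python
from math import comb

def bernstein(y, x):
    """Evaluate SUM(k=0 to n) of (y_k)*C(n,k)*(x^k)*(1-x)^(n-k)."""
    n = len(y) - 1
    return sum(y[k] * comb(n, k) * x**k * (1 - x)**(n - k) for k in range(n + 1))

# the basis polynomials C(n,k)*x^k*(1-x)^(n-k) sum to 1 for any x
print(bernstein([1, 1, 1, 1], 0.3))   # approximately 1.0
# and the endpoint values y_0 and y_n are reproduced exactly at x=0 and x=1
print(bernstein([2, 5, 7], 0.0), bernstein([2, 5, 7], 1.0))  # 2.0 7.0
```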
====
Subject: Re: Equal arm balance vs platform scale
...
>> Any scale that simply compares masses does not need to be calibrated.
>> Any scale that measures weight must be calibrated for the location
>> where it will be used. It's that simple.
>>
>> I am impressed. What is your point then?
>
> Why are you impressed if you don't know the point? You're just being
> sarcastic.
>
> Well spotted.
>
> Actually, I was impressed you know the difference between scales. I still
> don't understand the point behind your threads.
Don thinks that you can only do correct physics in the British Gravitational
System. (At least, that is the best I can understand from these discussions.)
He refuses (or is not able) to see that when you change the coordinate system
(in this case the basic units), you still get the same physics.

dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland,
+31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
====
Subject: Re: Longest day of year?
[...]
>>3.3 For the places between the tropics, A's maximum is a nontrivial
>>function of the latitude. The cause of this apparent discontinuity in
>>the equations is that in intertropical places, there are some days that
>>the sun passes over the cenit (and this day A is maximum). There is
>
> ^^^^^
> zenith
>
> This is what I dispute. I do not believe A (as I defined
> it above) is maximum on that day. But as yet I don't have
> the equations to back up this claim, and I freely admit
> that there may be something here that I've missed.
I agree with Russell here. For example, at 15 degrees North,
the longest day falls very near the summer solstice.
I downloaded Sungraph, which displays the illuminated part of
the world, the graphs of sunrise/sunset lines, etc.
Cf.:
http://www.analemma.com/SunGraph/index.html
(or, see http://www.time.gov/ > select Time zone
for currently illuminated part of earth).
Bernier
====
Subject: Re: asymptotic convergence when the limit is divergent
>
>
>>If the limit as t -> infinity of f(t)/g(t) is 1, then we say f(t) is
>>asymptotic to g(t), and we write f(t) ~ g(t). Whether the limits of f
>>or g exist individually is irrelevant. Note that ~ is an equivalence
>>relation.
>
>
> What is said about f(t)=1/t as t -> infinity? It approaches 0
> symptotically (not asymptotically)?
>
> Joe
>
> It is of course asymptotic to g(t)=0
I think his point is that f(t)/g(t) either (a) doesn't exist [as when f
= 1/t, g = 0] or (b) doesn't go to 1 [as when f = 0, g = 1/t].
I dunno. IS 1/t asymptotic to 0 as t -> infty? Not according to the
usual definition...
Ron Bruck
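The ratio definition is easy to make concrete numerically; a small sketch (the function choices are illustrative):

```python
# f ~ g means f(t)/g(t) -> 1 as t -> infinity (the definition quoted above)
def ratio(f, g, t):
    return f(t) / g(t)

f = lambda t: t
g = lambda t: t + 1
# the ratio tends to 1, so f ~ g, even though f(t) - g(t) = -1 for every t
for t in (10, 1000, 1_000_000):
    print(ratio(f, g, t))
# for f(t) = 1/t and g(t) = 0 the ratio is undefined at every t, so by
# this definition 1/t is *not* asymptotic to the zero function
```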
====
Subject: Re: asymptotic convergence when the limit is divergent
> It is of course asymptotic to g(t)=0
> I think his point is that f(t)/g(t) either (a) doesn't exist [as when f
> = 1/t, g = 0] or (b) doesn't go to 1 [as when f = 0, g = 1/t].
Exactly. This seems counterintuitive (at least to me). The family of
curves f_k(x) = k+1/x is asymptotic to k (more precisely, g_k(x) = k)
as x goes to infinity except for k=0, where there is no asymptote.
Joe
====
Subject: Re: asymptotic convergence when the limit is divergent
>
>
>It is of course asymptotic to g(t)=0
>>I think his point is that f(t)/g(t) either (a) doesn't exist [as when f
>>= 1/t, g = 0] or (b) doesn't go to 1 [as when f = 0, g = 1/t].
>
>
> Exactly. This seems counterintuitive (at least to me). The family of
> curves f_k(x) = k+1/x is asymptotic to k (more precisely, g_k(x) = k)
> as x goes to infinity except for k=0, where there is no asymptote.
>
> Joe
http://mathworld.wolfram.com/Asymptote.html

====
Subject: Re: asymptotic convergence when the limit is divergent
>
>
>
>> If the limit as t -> infinity of f(t)/g(t) is 1, then we say f(t) is
>> asymptotic to g(t), and we write f(t) ~ g(t). Whether the limits of f
>> or g exist individually is irrelevant. Note that ~ is an equivalence
>> relation.
>
>
> What is said about f(t)=1/t as t -> infinity? It approaches 0
> symptotically (not asymptotically)?
>
> Joe
>>
>> It is of course asymptotic to g(t)=0
>
> I think his point is that f(t)/g(t) either (a) doesn't exist [as when f
> = 1/t, g = 0] or (b) doesn't go to 1 [as when f = 0, g = 1/t].
>
> I dunno. IS 1/t asymptotic to 0 as t -> infty? Not according to the
> usual definition...
Dunno the english writing, but I'd say f(t) is asymptotic to 0 (since
f(t) - g(t) -> 0), and it's equivalent to 1/t, or 1/(t+1) or ...
====
Subject: Re: asymptotic convergence when the limit is divergent
> f(t) - g(t) -> 0), and it's equivalent to 1/t, or 1/(t+1) or ...
I agree that's what I'd probably say, but it's not correct by the
usual definition. Also note that f(x) = x and g(x) = x+1 are
asymptotic as x approaches infinity even though f(x) - g(x) = -1 for
all x.
Joe
====
Subject: Re: asymptotic convergence when the limit is divergent
>
>> Dunno the english writing, but I'd say f(t) is asymptotic to 0 (since
>> f(t) - g(t) -> 0), and it's equivalent to 1/t, or 1/(t+1) or ...
>
> I agree that's what I'd probably say, but it's not correct by the
> usual definition. Also note that f(x) = x and g(x) = x+1 are
> asymptotic as x approaches infinity even though f(x) - g(x) = -1 for
> all x.
I was talking about asymptotes; when doing asymptotic expansions, there is
another definition, so you are right. All of this is a question of vocabulary
and good definitions, I think.
====
Subject: Re: Irrep Weirdness
>
> Did you get
> 1,1,7,7,14,14,20,20,21,21,28,28,35,35,42,56,56,64,64,70,70,90
> for the degrees of the (nonweird) irreducible representations?
> That's what's in Sloane.
>
> I already found where I goofed:
> S3
>     1  3  2
> A1  1  1  1
> A2  1 -1  1
> E   2  0 -1
> X   1  1 -1
> Clearly X has integer characters and sum(char**2*class)=6,
> but is no combination integer*A1+integer*A2+integer*E.
> So, what is the correct sufficient condition that a
> set of integer entries is a representation at all?
X is also not orthogonal to A1, whereas you claimed
that your degree-zero representation was orthogonal
to all the (other) irreducible representations. If
you really had all the irreducible representations,
you couldn't have a nonzero vector orthogonal to
all of them.
As to the sufficient condition for a representation,
it must be a nonnegative integer linear combination
of the irreducible ones; I think you already know that,
but I'm not sure what else you want. To put the same
thing another way, its character must have a nonnegative
integer inner product with each irreducible character.
If you mean, how can I look at a vector and decide,
without knowing all the irreducibles, whether the vector
is a character; I don't know. I'm not sure there is
any way to do it at all (but I am very far from being
an authority on the matter).

Gerry Myerson (gerry@maths.mq.edi.ai) (i -> u for email)
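The orthogonality test mentioned above is easy to run mechanically. A sketch using the S3 table from this thread (since S3's characters are real, the inner product needs no conjugation; the names `sizes`, `chars`, `inner` are mine):

```python
# class sizes of S3 (identity, transpositions, 3-cycles) and its characters
sizes = [1, 3, 2]
chars = {"A1": [1, 1, 1], "A2": [1, -1, 1], "E": [2, 0, -1]}
X = [1, 1, -1]
order = sum(sizes)  # |S3| = 6

def inner(u, v):
    # character inner product; S3's characters are real, so no conjugate needed
    return sum(s * a * b for s, a, b in zip(sizes, u, v)) / order

# X passes the norm test: sum(char**2 * class)/|G| = 1, like an irreducible
print(inner(X, X))
# but its inner products with the irreducibles are not all integers,
# so X is not a character (in particular it is not orthogonal to A1)
print([round(inner(X, c), 3) for c in chars.values()])
```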
====
Subject: Re: Discrete Optimization
>> Interesting problem. At first it seems trivial until you realize why
>> your second rough solution is indeed suboptimal. Have you tried any
>> greedy algorithms? One idea I had is that you could replace your
>> decision variables by binary variables d_i where d_i = 1 iff b_i !=
>> b_(i+1) since the optimal b_j can be recovered from the d_i. This
>> converts it into a binary integer programming problem (although I don't
>> think a linear one) which might be easier to solve than the original
>> formulation.
>> A related idea is to consider it as some sort of partition problem: you
>> are partitioning n into blocks of size >= m (except for the last
>> block?) where you have a cost associated with each partition. You could
>> perhaps concentrate on setting the boundaries between the blocks in an
>> optimal way. Maybe do that first for a fixed number of blocks and if
>> you get a decent solution just iterate over all possible numbers of
>> blocks (n/m maybe plus or minus 1)
>> Wish I could be more helpful.
>> John Coleman
>> can anybody help me with this optimization problem
> (algorithm, references, related problems):
>> given: 1.) finite sequence of nonnegative integers, a_0, ... a_n
> 2.) integer m < n
> wanted: b_0, ..., b_n
> constraints: 1.) b_i >= a_i
> 2.) If b_{j+1} != b_j,
> then j >= m-1 and b_{j-k} = b_j for all k=1, ..., m-1
> objective: Minimize sum b_j
>> Typical problem size:
> n between 100 and 200
> m between 2 and 50
> max_i a_i = 4
>> Trivial (suboptimal) solution:
> b_i = max_j a_j for all i
>> Almost trivial (still suboptimal) solution:
> Divide 0, ..., n into blocks of length m. In every block set all b_i
> to the maximum of the a_j there.
>> Onno
> Indeed, the problem can be formulated as an integer linear program as
> follows:
> minimize sum {j in 0..n} b[j] subject to
b[i] >= a[i] for i in 0..n
b[j+1] = b[j] for j in 0..m-2;
-(amax-a[j])*z[j] <= b[j+1] - b[j] <= (amax-a[j])*z[j] for j in m-1..n-1,
> k in 1..m-1, where amax = max_i a[i]
sum {k in j..min(j+m-1,n-1)} z[k] <= 1 for j in m-1..n-m
z[j] in {0,1} for j in m-1..n-1
> Note that z[j] = 0 forces b[j+1] = b[j], and z[j] = 1 imposes only
> redundant bounds on b[j+1] - b[j]. The net effect of the constraints
> involving z is that b can change values at most once during each
> consecutive m-interval.
> But I think a better approach for your sequential decision problem is to
> formulate it as a dynamic program, equivalently, as a shortest path
> problem. I will work out the details and post again later.
> Rob
First, a correction to the integer linear programming formulation given above:
The lower bound on b[j+1] - b[j] should have a[j+1] instead of a[j]. That is,
-(amax-a[j+1])*z[j] <= b[j+1] - b[j].
Now the dynamic programming formulation:
Let v[k, s] be the optimal sum {i in k..n} b[i], given that b[k-1] = s.
Then we want to compute
m*initialS + v[m, initialS], where initialS = max {i in 0..m-1} a[i].
Boundary condition:
v[n, s] = a[n] for all s.
(We can always take b[n] = a[n].)
Main recursion:
v[k, s] = min {if[s >= a[k], s + v[k + 1, s], infinity],
min_t {min(m, n - k)*t + v[k + min(m, n - k), t]}},
where the min_t is taken over all t (not equal to s) such that
t >= max {i in k..min(k + m - 1, n)} a[i].
The idea behind the recursion is as follows. Given b[k-1] = s, we choose
either b[k] = s (unavailable if s < a[k]) or b[k] = t, where t >= the next m
a[i]'s. If we take b[k] = s, we incur an immediate cost of s and then
choose b[k+1],...,b[n] optimally. If instead we take b[k] = t for some t
not equal to s, we incur an immediate cost of t for each of
b[k], ..., b[min(k+m-1,n)] and then choose the remaining b[j]'s optimally.
The optimal b[j]'s can be recovered from the values of v[k,s], for example,
by keeping track of the argmin at each stage.
Rob
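The recursion above translates almost directly into memoized code. A sketch (the names `min_sum` and `v` are mine, not Rob's; on a value change the code commits min(m, n-k+1) positions at once so the short final run needs no special case, and candidate levels are restricted to values occurring in a, which is enough for an optimum):

```python
from functools import lru_cache

def min_sum(a, m):
    """Minimize sum(b) subject to b[i] >= a[i], where b may change value
    only after a constant run of at least m terms (the problem above)."""
    n = len(a) - 1
    levels = sorted(set(a))

    @lru_cache(maxsize=None)
    def v(k, s):
        # minimal cost of b[k..n], given b[k-1] == s and that the run
        # ending at k-1 is already long enough to permit a change at k
        if k > n:
            return 0
        best = float("inf")
        if s >= a[k]:                    # keep the current level at position k
            best = s + v(k + 1, s)
        blk = min(m, n - k + 1)          # a new level must be held for the
        need = max(a[k:k + blk])         # next blk positions
        for t in levels:
            if t != s and t >= need:
                best = min(best, blk * t + v(k + blk, t))
        return best

    init = max(a[:m])                    # value of the initial length-m block
    return m * init + v(m, init)

print(min_sum([3, 0, 0, 0], 2))  # 6, via b = [3, 3, 0, 0]
```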
====
Subject: Re: Discrete Optimization
> Interesting problem. At first it seems trivial until you realize why
> your second rough solution is indeed suboptimal. Have you tried any
> greedy algorithms? One idea I had is that you could replace your
> decision variables by binary variables d_i where d_i = 1 iff b_i !=
> b_(i+1) since the optimal b_j can be recovered from the d_i. This
> converts it into a binary integer programming problem (although I don't
> think a linear one) which might be easier to solve than the original
> formulation.
>> A related idea is to consider it as some sort of partition problem: you
> are partitioning n into blocks of size >= m (except for the last
> block?) where you have a cost associated with each partition. You could
> perhaps concentrate on setting the boundaries between the blocks in an
> optimal way. Maybe do that first for a fixed number of blocks and if
> you get a decent solution just iterate over all possible numbers of
> blocks (n/m maybe plus or minus 1)
>> Wish I could be more helpful.
>> John Coleman
> can anybody help me with this optimization problem
>> (algorithm, references, related problems):
>> given: 1.) finite sequence of nonnegative integers, a_0, ... a_n
>> 2.) integer m < n
>> wanted: b_0, ..., b_n
>> constraints: 1.) b_i >= a_i
>> 2.) If b_{j+1} != b_j,
>> then j >= m-1 and b_{j-k} = b_j for all k=1, ..., m-1
>> objective: Minimize sum b_j
>> Typical problem size:
>> n between 100 and 200
>> m between 2 and 50
>> max_i a_i = 4
>> Trivial (suboptimal) solution:
>> b_i = max_j a_j for all i
>> Almost trivial (still suboptimal) solution:
>> Divide 0, ..., n into blocks of length m. In every block set all b_i
>> to the maximum of the a_j there.
>> Onno
>> Indeed, the problem can be formulated as an integer linear program as
>> follows:
>> minimize sum {j in 0..n} b[j] subject to
>> b[i] >= a[i] for i in 0..n
>> b[j+1] = b[j] for j in 0..m-2;
>> -(amax-a[j])*z[j] <= b[j+1] - b[j] <= (amax-a[j])*z[j] for j in m-1..n-1,
>> k in 1..m-1, where amax = max_i a[i]
>> sum {k in j..min(j+m-1,n-1)} z[k] <= 1 for j in m-1..n-m
>> z[j] in {0,1} for j in m-1..n-1
>> Note that z[j] = 0 forces b[j+1] = b[j], and z[j] = 1 imposes only
>> redundant bounds on b[j+1] - b[j]. The net effect of the constraints
>> involving z is that b can change values at most once during each
>> consecutive m-interval.
>> But I think a better approach for your sequential decision problem is to
>> formulate it as a dynamic program, equivalently, as a shortest path
>> problem. I will work out the details and post again later.
>> Rob
> First, a correction to the integer linear programming formulation given
> above:
> The lower bound on b[j+1] - b[j] should have a[j+1] instead of a[j]. That
> is,
> -(amax-a[j+1])*z[j] <= b[j+1] - b[j].
> Now the dynamic programming formulation:
> Let v[k, s] be the optimal sum {i in k..n} b[i], given that b[k-1] = s.
> Then we want to compute
> m*initialS + v[m, initialS], where initialS = max {i in 0..m-1} a[i].
> Boundary condition:
> v[n, s] = a[n] for all s.
> (We can always take b[n] = a[n].)
> Main recursion:
> v[k, s] = min {if[s >= a[k], s + v[k + 1, s], infinity],
> min_t {min(m, n - k)*t + v[k + min(m, n - k), t]}},
> where the min_t is taken over all t (not equal to s) such that
> t >= max {i in k..min(k + m - 1, n)} a[i].
> The idea behind the recursion is as follows. Given b[k-1] = s, we choose
> either b[k] = s (unavailable if s < a[k]) or b[k] = t, where t >= the next
> m a[i]'s. If we take b[k] = s, we incur an immediate cost of s and then
> choose b[k+1],...,b[n] optimally. If instead we take b[k] = t for some t
> not equal to s, we incur an immediate cost of t for each of
> b[k], ..., b[min(k+m-1,n)] and then choose the remaining b[j]'s optimally.
> The optimal b[j]'s can be recovered from the values of v[k,s], for
> example, by keeping track of the argmin at each stage.
> Rob
Another correction to the integer linear program: The clique inequalities
sum {k in j..min(j+m-1,n-1)} z[k] <= 1
should be for j in m-1..n-1 (instead of j in m-1..n-m). If m > n/2, my
original formulation had too few inequalities.
Rob
====
Subject: Re: Euclidean Geometry in Schools
> One of the best courses I ever took was in high school in the 50's, a
> sophomore course in Plane Geometry taught by a super teacher named Don
> Joslin in Rapid City SD. He made the course interesting and
> challenging enough that students would actually come to his room after
> school to work problems and help others.
> In more recent times I was dismayed to discover when my kid went
> through high school that geometry and trigonometry are just given
> passing attention in integrated courses. And to make matters worse,
> nobody teaches analytic geometry.
>
Indeed, more time is spent on integrating courses than on integration.
> It doesn't take a genius to see why so many of our high school
> graduates come to the universities so woefully prepared in
> mathematics.
>
Integration is for Yankees, differentiation for confederates?
====
Subject: Re: Euclidean Geometry in Schools
>> In more recent times I was dismayed to discover when my kid went
>> through high school that geometry and trigonometry are just given
>> passing attention in integrated courses. And to make matters worse,
>> nobody teaches analytic geometry.
Wow! Trigonometry, I could understand, but geometry and analytic geometry!
====
Subject: Re: Euclidean Geometry in Schools
Most of us would like to see
> Euclidean geometry in schools.
Would you like to teach it?
>
I've none of the required requirements required to teach.
====
Subject: Re: Euclidean Geometry in Schools
.....................
> This PDF discusses a French reform of the teaching of mathematics during the
> seventies, which was a complete fiasco. If you can read French, I think it's
> worth reading that :)
I can read French, but I do not feel like going through 63
pages of philosophy. Frankly, I do not think that
Dieudonne quite got the conceptual idea of abstract
mathematics, as there was much of Bourbaki which is, from
the standpoint of someone used to abstract topology and
measure theory, rather concrete. This does not mean that
special cases are not useful, but the real understanding is
not there.
>> Even after quickly skimming that chapter, I envy the French for
>> being willing to admit faults of the New Maths. In New Zealand the
>> reformers of around 1970 never admitted their mistakes, so our school
>> curriculum has had to evolve slowly and painfully from that stage, and
>> IMHO needs to evolve a lot more.
In the US, the program was HIGHLY tested, and succeeded
quite well when those who could understand the concepts
taught it. I doubt that any innovation has been that
well tested. It was a real surprise that there was a
problem with the teachers understanding it, and this has
even gotten worse. We will never get good teaching of
mathematics done if those teaching it cannot understand
anything other than memorization of facts and grinding
out of answers. They may think they are teaching such
concepts as commutativity by giving the word and a
formal definition, but unless one is a candidate for
a PhD, understanding is not likely to occur if that is
all that is done with it.
>Sad. But it's what happens when we let mathematicians take education
>decisions. That's not their job!
It might not be their job, but those who are now making
the decisions have NO understanding of mathematics. In
fact, they have no understanding of any structured
subject; grammar is not taught, either, and history has
become the Marxist idea that one needs to know only what
the condition of the peasants was at each time.

This address is for information only. I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Department of Statistics, Purdue University
hrubin@stat.purdue.edu Phone: (765)4946054 FAX: (765)4940558
====
Subject: Re: Euclidean Geometry in Schools
>
>
>
>
> .....................
>
>> This PDF discusses a French reform of the teaching of mathematics during the
>> seventies, which was a complete fiasco. If you can read French, I think it's
>> worth reading that :)
>
> I can read French, but I do not feel like going through 63
> pages of philosophy.
You are forgiven :)
> Frankly, I do not think that
> Dieudonne quite got the conceptual idea of abstract
> mathematics, as there was much of Bourbaki which is, from
> the standpoint of someone used to abstract topology and
> measure theory, rather concrete.
In France they are considered rather abstract :) But I don't know what a
French topology expert would think.
> This does not mean that
> special cases are not useful, but the real understanding is
> not there.
>
> Even after quickly skimming that chapter, I envy the French for
> being willing to admit faults of the New Maths. In New Zealand the
> reformers of around 1970 never admitted their mistakes, so our school
> curriculum has had to evolve slowly and painfully from that stage, and
> IMHO needs to evolve a lot more.
>
> In the US, the program was HIGHLY tested, and succeeded
> quite well when those who could understand the concepts
> taught it. I doubt that any innovation has been that
> well tested.
I was not born yet, but from what I read or was told, I doubt it was very
well tested.
> It was a real surprise that there was a
> problem with the teachers understanding it, and this has
> even gotten worse. We will never get good teaching of
> mathematics done if those teaching it cannot understand
> anything other than memorization of facts and grinding
> out of answers. They may think they are teaching such
> concepts as commutativity by giving the word and a
> formal definition, but unless one is a candidate for
> a PhD, understanding is not likely to occur if that is
> all that is done with it.
>
>> Sad. But it's what happens when we let mathematicians take education
>> decisions. That's not their job!
>
> It might not be their job, but those who are now making
> the decisions have NO understanding of mathematics. In
> fact, they have no understanding of any structured
> subject; grammar is not taught, either, and history has
> become the Marxist idea that one needs to know only what
> the condition of the peasants was at each time.
Well... Here teachers have at least a BSc and often a master's degree,
and teachers' teachers are almost always teachers as well :)
But I don't know who *really* takes the decisions, that's rather obscure...
I think usually, groups of teachers and scientists give their advice, then a
minister takes a decision.
====
Subject: Re: Euclidean Geometry in Schools
> those who could understand the concepts taught it. I doubt that any
> innovation has been that well tested. It was a real surprise that there
> was a problem with the teachers understanding it, and this has even
> gotten worse. We will never get good teaching of mathematics done if
> those teaching it cannot understand anything other than memorization of
> facts and grinding out of answers. They may think they are teaching
> such concepts as commutativity by giving the word and a formal
> definition, but unless one is a candidate for a PhD, understanding is
> not likely to occur if that is all that is done with it.
>
"To be a math teacher is to study education."
Well, then shut up and go teach education.
Now replacing US education-theory-educated teachers are foreign teachers
who went to school not for social promotion, but to learn.
> It might not be their job, but those who are now making the decisions
> have NO understanding of mathematics. In fact, they have no
> understanding of any structured subject; grammar is not taught, either,
> and history has become the Marxist idea that one needs to know only what
> the condition of the peasants was at each time.
>
They understand education theory with a minor in office politics.
I've seen two teachers driven out of schools because they were too
creative in a school that, even though students got excellent test scores,
was reshaped according to education theory instead of acknowledging the
innovative high school was a learning success.
====
Subject: Re: Craps probability question
I'm for letting WinCraps run the probability of getting those 9 hands of
50 rolls and then have the statistical estimation proponents tweak their
methods until their results more closely match WinCraps results.
This thread on probability highlights the difference between two
methods: estimation versus experimentation. I'm glad the original
in WinCraps. But we know in this newsgroup, people report that
statistical estimates and actual outcomes at the craps tables do not
always match; just look at the threads about dicesetting.
So why is there a bit of a lack of consensus between the statistical
estimating methods in this thread? Why, in general, is there a bit of a
lack of consensus between statistical estimates and real-world
observations? I've found sometimes that population as defined by
statistical methods users too often does not represent a complete
listing of all possible events associated with a set of experimental
units. In other words, their sample space needs to be redefined. Even
when these methods have accurately defined the population or sample space,
these methods often lack feedback, lack data to compare their results
against. In the case of craps we have WinCraps as feedback. So I
submit that statistical methods users in this thread such as scott,
reeffish, stephen, richard, dragon among others could reexamine how they
defined population or sample space, and use WinCraps results for
feedback for tweaking purposes. The others such as myself, cat_in_awe,
spuddemon, midknightskulker, timmyrocker among others seem to believe
actual dice outcomes. alanshank, of course, walks effortlessly between
both methodological camps.
Trout

>
>
> At the end of a History Channel show about craps they touted some results
> of 'expert' dice rollers. One of the metrics they showed was the number of
> times they throw the dice (rolls) before they seven-out. (This can be
> called one hand: receiving the dice until they seven-out.)
>
> My response is, So, the guy didn't 7 out for fifty rolls. Okay. That
> doesn't necessarily mean he tossed forty-nine numbers before a 7.
>
> It wouldn't serve these flim-flammers' interests to admit they tossed
> several natural 7s along the way to their astounding fifty-plus roll
> hands before they 7 out. Why, that might put them smack-dab in
> Averageville.
>
> When I'm at a table and I happen to have a decent roll of twenty or
> more tosses before the 7 out, I just count the number of tosses before
> the 7 out. If I have a few natural 7s, I don't start the count all
> over again, because I'm still holding the dice. I have a strong
> suspicion the same holds true for the Peashooters.
>
> Peashooters who say they can produce 7s on the come-out would count
> those come-out 7s as successfully controlling/influencing the dice, so
> of course they'd add them to their tote board of throws before the 7
> out.
>
> Lady Shooter
====
Subject: Re: Cantor and the binary tree
>
>
>
>>The natural numbers are the entities defined by Peano arithmetic, and one can
>>prove using the Peano postulates that every natural number is finite. But the
>>*set* of natural numbers is infinite. I don't see a problem here.
>
>
> If we find by induction that every even number 2n is larger than the
> cardinal number n of its set {2,4,6,...,2n} then we see for *every even
> number* that the cardinal number n of its set cannot surpass its value
> 2n. This proof holds, by induction, for every even number. But in the
> whole set of all even numbers there is nothing else but even numbers.
> So it is impossible that its cardinality is larger than any even
> number. Attention: This proof by induction uses only finite even
> numbers. It does not use the whole set of even numbers. But its result
> concerns this set, because this set contains solely finite even numbers.
>
Not quite. The result concerns the *elements* of the set, which are distinct
from the set itself.
Matt
>
====
Subject: Re: Cantor and the binary tree
On Wed, 22 Jun 2005 10:31:39 -0400, Tony Orlow (aeo6)
>Virgil said:
>> Excuse me Martin, but maybe you should have some of what I am smoking. Every
>> path ends in a leaf node, which are half the nodes in the tree. You start
>> with one node that represents the root path. For each pair of nodes, you
>> create a new path. A finite tree with n levels (including the root) has
>> (2^n)-1 nodes, (2^n)-2 branches, and only 2^(n-1), or (2^n)/2 paths, as
>> denoted by its leaf nodes. This relationship is preserved through infinity,
>> even in the absence of identifiable leaf nodes.
>>
>> TO is right for finite trees but wrong for maximal binary trees.
>No, this property holds for all balanced binary trees.
Can you prove that this relationship is preserved through infinity?
That seems to be the real sticking point here.
>> For any finite tree there is at least one more node than there are
>> paths. If every path shares the root node, each finite path will also
>> have a leaf node, so there is at least one extra node.
>Incorrect. There is one more node than branches, and twice as many branches
>as paths.
for finite trees this is true. For infinite trees it no longer holds.
>> TO's mistake is to presume that what happens for finite trees must also
>> be the case for (infinite) maximal binary trees. It is a presumption
>> that can, in fact, be disproved, and has been disproved in these threads.
>No it has not. Your various interpretations of the branches in the tree
>don't prove things about the structure of the tree itself, and your
>mishmoshed judgement of countability doesn't hold water for me anyway. Have
>you ever actually USED a binary tree for anything? I mean, weren't you the
>one that told me one can't insert a node in the middle of the tree, when
>computers do it every single day? Drop the bijections, and inspect the
>structure of the tree.
The trees do not change anymore than 3 changes to 4 when incremented.
(I.e. the value of a variable changes to *different* tree when you
insert, the tree does not.)
>>
====
Subject: Re: Cantor and the binary tree
> On Wed, 22 Jun 2005 10:31:39 -0400, Tony Orlow (aeo6)
>
>Virgil said:
>> Excuse me Martin, but maybe you should have some of what I am smoking.
>> Every path ends in a leaf node, which are half the nodes in the tree. You
>> start with one node that represents the root path. For each pair of nodes,
>> you create a new path. A finite tree with n levels (including the root)
>> has (2^n)-1 nodes, (2^n)-2 branches, and only 2^(n-1), or (2^n)/2 paths,
>> as denoted by its leaf nodes. This relationship is preserved through
>> infinity, even in the absence of identifiable leaf nodes.
>>
>> TO is right for finite trees but wrong for maximal binary trees.
>No, this property holds for all balanced binary trees.
>
> Can you prove that this relationship is preserved through infinity?
> That seems to be the real sticking point here.
>
>> For any finite tree there is at least one more node than there are
>> paths. If every path shares the root node, each finite path will also
>> have a leaf node, so there is at least one extra node.
>Incorrect. There is one more node than branches, and twice as many
>branches as paths.
>
> for finite trees this is true. For infinite trees it no longer holds.
Actually, it is not even true for finite trees.
For a binary tree in which every path is only one branch long, there are the
same number of paths as branches, namely 2.
For a binary tree in which each path is two branches long, there are 4
paths and 6 branches.
If each path is 3 branches, there are 14 branches and 8 paths.
And in general if each path is n branches there are 2^n paths and
2*(2^n - 1) branches, which is never quite twice the number of paths.
So TO's TWICE as many is always wrong.
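The counts above are easy to verify mechanically; a sketch for full binary trees in which every path is n branches long:

```python
# a full binary tree in which every path is n branches long has 2^n paths
# (one per leaf) and 2*(2^n - 1) branches (edges), since level k contributes
# 2^k edges for k = 1..n
def paths(n):
    return 2 ** n

def branches(n):
    return 2 * (2 ** n - 1)

for n in (1, 2, 3):
    print(n, paths(n), branches(n))   # (2, 2), (4, 6), (8, 14) as above
# branches(n)/paths(n) = 2 - 2/2^n: it approaches 2 but never reaches it
```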
>
>> TO's mistake is to presume that what happens for finite trees must also
>> be the case for (infinite) maximal binary trees. It is a presumption
>> that can, in fact, be disproved, and has been disproved in these threads.
>No it has not. Your various interpretations of the branches in the tree
>don't prove things about the structure of the tree itself, and your
>mishmoshed judgement of countability doesn't hold water for me anyway.
>Have you ever actually USED a binary tree for anything? I mean, weren't
>you the one that told me one can't insert a node in the middle of the
>tree, when computers do it every single day?
TO gets it wrong, as usual. What I actually said was that a computer
cannot insert a node at an arbitrary position in a maximal binary tree,
the reason being that no computer with only a finite memory capacity can
deal with arbitrary positions in infinite trees.
Now a Turing machine acting on an infinite tape,...
====
Subject: Re: Cantor and the binary tree
> If we find by induction that every even number 2n is larger than the
> cardinal number n of its set {2,4,6,...,2n} then we see for *every even
> number* that the cardinal number n of its set cannot surpass its value
> 2n. This proof holds, by induction, for every even number. But in the
> whole set of all even numbers there is nothing else but even numbers.
Yes, so what?
> So it is impossible that its cardinality is larger than any even
> number.
Why not? Provide a proof, please.
> Attention: This proof by induction uses only finite even
> numbers. It does not use the whole set of even numbers. But its result
> concerns this set, because this set contains solely finite even numbers.
The proof is valid for all finite sets of even numbers. There is no reason
to suspect that it holds for the infinite set of all even numbers. The
major difference is that each finite set of even numbers has a maximal
element; no such maximal element exists for the set of all even numbers.

dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland,
+31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
====
Subject: The factoring problem
I've been reading a bit on public key cryptography and RSA,
and have to admit it's clever.
But I've been wondering, why is the factoring problem so hard?
Maybe I'm naive, or just dense, but intuitively it seems like
there should exist a straightforward, efficient method, for
something so essential.
Sorry for a possibly stupid question...

Rich
====
Subject: Re: The factoring problem
> I've been reading a bit on public key cryptography and RSA,
> and have to admit it's clever.
> But I've been wondering, why is the factoring problem so hard?
Why is it hard? No one knows. For that matter, no one really knows *if*
it is hard. Our best evidence is that lots of really bright people have
thought hard about this problem, and while they've come up with a number
of clever ideas, we can still come up with factoring problems that appear
to be intractable (but not so large that generating them is intractable).
> Maybe I'm naive, or just dense, but intuitively it seems like
> there should exist a straightforward, efficient method, for
> something so essential.
> Sorry for a possibly stupid question...
>
> Rich
====
Subject: Re: The factoring problem
>> I've been reading a bit on public key cryptography and RSA,
>> and have to admit it's clever.
>>
>> But I've been wondering, why is the factoring problem so hard?
>
> Why is it hard? No one knows. For that matter, no one really knows *if*
> it is hard. Our best evidence is that lots of really bright people have
> thought hard about this problem, and while they've come up with a number
> of clever ideas, we can still come up with factoring problems that appear
> to be intractable (but not so large that generating them is intractable).
And an interesting point: there is still progress in factoring methods, so
not everything has been said on that subject.
====
Subject: Re: The factoring problem
Am 24.06.05 07:32, Jean-Claude Arbaut wrote:

>>> I've been reading a bit on public key cryptography and RSA,
>>> and have to admit it's clever.
>>>
>>> But I've been wondering, why is the factoring problem so hard?
>>
>> Why is it hard? No one knows. For that matter, no one really knows *if*
>> it is hard. Our best evidence is that lots of really bright people have
>> thought hard about this problem, and while they've come up with a number
>> of clever ideas, we can still come up with factoring problems that appear
>> to be intractable (but not so large that generating them is intractable).
>
> And an interesting point: there is still progress in factoring methods, so
> not everything has been said on that subject.
Well, assume an alien people have a native view of things in terms
of multiplication only.
Their original number system would be, for instance, only of 2^k.
No problem of factoring here.
With the development of math they invent the operation of
addition, for the reason that a bunch of three apples cannot be described
in terms of 2^k, of multiplication of bunches of 2 apples.
Again they have a native understanding of 3, 3*3, 3*3*3, ... just 3^k,
as humans would say, as well in 3^k, and bunches of apples based on
multiplication with the same factor never match if started from
different sizes.
Now, by investigating the new operation of addition they find many
such holes, and find the set of prime numbers.
But their intuitive way to see things is still the multiplication.
By their native view they invent a number system in terms of
prime powers now:
[exp] [exp] [exp] ...
representing the exponents of
 2     3     5   ...
All numbers they deal with are represented (and mentally thought of) in
such a system; even pi is derived in terms of an infinite product.
They will have no problem in factoring numbers; what the heck
should be the problem to factor
[3] [5] [2] ?
But they have a hard math problem with addition! How does
[3] [5] [2] change by simply adding 1, an operation which is
already difficult on its own and quite unfamiliar to that people...
They'll find it a deep mathematical problem, which is not fully
evaluated yet, and it seems too hard for even the brightest mathematicians
of the world to formulate an efficient algorithm for determining the
changes in the digit representations of a number caused by addition.
Gottfried Helms
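A sketch of that prime-power notation in Python (all helper names invented here): multiplying, and hence "factoring", is trivial in this representation, while adding 1 forces a round trip through the ordinary representation:

```python
# Represent n by the exponents of 2, 3, 5, ... in its factorization,
# e.g. [3] [5] [2] stands for 2^3 * 3^5 * 5^2 = 48600.
def primes():
    # crude unbounded trial-division prime generator
    found = []
    n = 2
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def to_exponents(n):
    exps = []
    for p in primes():
        if n == 1:
            break
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        exps.append(e)
    return exps

def from_exponents(exps):
    n, gen = 1, primes()
    for e in exps:
        n *= next(gen) ** e
    return n

a = [3, 5, 2]                     # the post's example
n = from_exponents(a)             # 48600
assert to_exponents(n) == a       # "factoring" is just reading the digits
assert to_exponents(n + 1) != a   # adding 1 scrambles every digit
```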
====
Subject: Re: The factoring problem
>>> I've been reading a bit on public key cryptography and RSA,
>>> and have to admit it's clever.
>>>
>>> But I've been wondering, why is the factoring problem so hard?
>>
>> Why is it hard? No one knows. For that matter, no one really knows *if*
>> it is hard. Our best evidence is that lots of really bright people have
>> thought hard about this problem, and while they've come up with a number
>> of clever ideas, we can still come up with factoring problems that appear
>> to be intractable (but not so large that generating them is intractable).
>
> And an interesting point: there is still progress in factoring methods, so
> not everything has been said on that subject.
Definitely. RSA has a factoring challenge:
http://www.rsasecurity.com/rsalabs/node.asp?id=2094
Some of the prizes are quite large, and as yet uncollected:
http://www.rsasecurity.com/rsalabs/node.asp?id=2093
====
Subject: Re: The factoring problem
I think it is basically because our number system can be derived from
set theory, which means it's all based on counting elements,
{}, #=0
{{}}, #=1
{ {{}}, {} }, #=2
...
and that's why addition and subtraction are so easy and two-way. But
multiplication is an algorithm invented by man. Even though it has a
natural and common-sense interpretation in the real world, it's just an
algorithm mathematically, and so basically you're trying to reverse
engineer that algorithm.
Todd Smith
UCF Mathematics
www.exampleproblems.com (wiki) Math example problems and solutions
from all areas of graduate level mathematics.
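That element-counting construction can be imitated directly, for instance with Python frozensets (a sketch of one standard encoding; the function name is ours):

```python
# Model the natural n as the set of all smaller naturals:
# 0 = {}, 1 = {{}}, 2 = { {{}}, {} }, ... so counting elements (#)
# recovers the number itself.
def encode(n):
    s = frozenset()
    for _ in range(n):
        s = s | {s}   # successor: n+1 is n together with {n}
    return s

assert len(encode(0)) == 0                    # {}, #=0
assert encode(1) == frozenset([frozenset()])  # {{}}, #=1
assert len(encode(2)) == 2                    # { {{}}, {} }, #=2
```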
====
Subject: Re: The factoring problem
> I think it is basically because our number system can be derived from
> set theory, which means it's all based on counting elements,
>
> {}, #=0
> {{}}, #=1
> { {{}}, {} } #=2
> ...
>
> and that's why addition and subtraction are so easy and two-way. But
> multiplication is an algorithm invented by man.
Eh??? Set theory is by far a more artificial and bizarre construct
than the integers. Kronecker said God made the natural numbers, and all
else is the work of man. Set theory is certainly right up there as a
candidate for what he was referring to, except I'm not sure if it was
invented then. You're probably aware that if you take a solid ball in
3-space with unit radius, then according to set theory you can cut it
into 5 pieces and then reassemble those pieces into two solid balls
of unit radius. I just can't see such a theory
as being somehow more basic than multiplication.
====
Subject: Re: The factoring problem
>> I think it is basically because our number system can be derived from
>> set theory, which means it's all based on counting elements,
>>
>> {}, #=0
>> {{}}, #=1
>> { {{}}, {} } #=2
>> ...
>>
>> and that's why addition and subtraction are so easy and two-way. But
>> multiplication is an algorithm invented by man.
>
> Eh??? Set theory is by far a more artificial and bizarre construct
> than integers. Kronecker said God made the natural numbers, and all
> else is the work of man. Set theory is certainly right up there as a
> candidate for what he was referring to, except I'm not sure if it was
> invented then.
Little quote to help understand why Kronecker said that:

However the topics he studied were restricted by the fact that he believed
in the reduction of all mathematics to arguments involving only the
integers and a finite number of steps. Kronecker is well known for his
remark:
God created the integers, all else is the work of man.
Kronecker believed that mathematics should deal only with finite numbers
and with a finite number of operations. He was the first to doubt the
significance of non-constructive existence proofs. It appears that, from
the early 1870s, Kronecker was opposed to the use of irrational numbers,
upper and lower limits, and the Bolzano-Weierstrass theorem, because of
their non-constructive nature. Another consequence of his philosophy of
mathematics was that to Kronecker transcendental numbers could not exist.
More at
http://www-groups.dcs.st-and.ac.uk/~history/Mathematicians/Kronecker.html
> You're probably aware that if you take a solid ball in
> 3-space with unit radius, then according to set theory you can cut it
> into 5 pieces and then reassemble those pieces into two solid balls
> of unit radius.
Banach-Tarski paradox
> I just can't see such a theory
> as being somehow more basic than multiplication.
Addition ? ;)
====
Subject: Re: The factoring problem
>> I just can't see such a theory
>> as being somehow more basic than multiplication.
>
> Addition ? ;)
Short on careful reading ;) Actually, finite set theory is very intuitive.
Difficulties arise when you deal with infinite objects. Before set theory,
infinity was only *potential*, e.g. you can prove that a property is true
for every integer by induction.
====
Subject: Re: The factoring problem
> Short on careful reading ;) Actually, finite set theory is very
> intuitive. Difficulties arise when you deal with infinite objects.
> Before set theory, infinity was only *potential*, e.g. you can prove
> that a property is true for every integer by induction.
Sure, but there goes the usual way of constructing sqrt(2) from an
infinite sequence. You're back to just having integers again ;).
====
Subject: Re: The factoring problem
> I think it is basically because our number system can be derived from
> set theory, which means it's all based on counting elements,
>
> {}, #=0
> {{}}, #=1
> { {{}}, {} } #=2
> ...
>
> and that's why addition and subtraction are so easy and two-way. But
> multiplication is an algorithm invented by man. Even though it has
> natural and common sense interpretation in the real world, it's just an
> algorithm mathematically and so basically you're trying to reverse
> engineer that algorithm.
>
>
That's an interesting point of view. You consider the fact that 15 = 3 *
5 to be an artifact of human thought, and not an intrinsic property of
the number 15?
====
Subject: Re: The factoring problem
> But I've been wondering, why is the factoring problem so hard?
> Maybe I'm naive, or just dense, but intuitively it seems like
> there should exist a straightforward, efficient method, for
> something so essential.
It's not known for certain that factoring is hard; it's just that
smart mathematicians have tried for hundreds of years to find such a
method and nobody has succeeded. It might help to compare it with
polynomial multiplication. If you multiply two polynomials P and Q
from Z[x], then the product's coefficient of (say) x**37 comes from
adding up a particular bunch of terms that you can readily identify,
so factoring one-variable polynomials isn't so hard. With integer
multiplication, stuff in the 37th decimal place is affected not only
by digit products like p(23) * q(14) (since 23+14=37), but by
lower-placed digits as well, that have caused carries into the 37th
place. All those terms get added together and you can no longer tell
where they came from. So factoring is sort of like trying to
unscramble an egg.
However, someone could in fact find a fast algorithm tomorrow. If you
get a bright idea and want to post it here, please include a worked
out example with a large composite, since some silly people come up
with algorithms that only work on small composites or don't work at
all, and then they post here and can't be persuaded that they haven't
really solved the problem ;).
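The contrast can be made concrete with a short sketch (names are ours; coefficient lists, lowest degree first, stand in for polynomials in Z[x]):

```python
# Coefficient k of P*Q is the sum of p[i]*q[j] over i+j = k: each output
# position comes from identifiable input pairs, with no carries.
def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

assert poly_mul([1, 2], [3, 4]) == [3, 10, 8]  # (1+2x)(3+4x) = 3+10x+8x^2

# The "same" digits multiplied as integers: 21 * 43 = 903. The coefficient
# 10 overflows its decimal place and carries into the next one, so the
# digits 9,0,3 no longer show where each contribution came from.
assert 21 * 43 == 903
```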
====
Subject: Re: Why are there no math prodigy monkeys?
I think it's because to do math, I suppose any math besides addition
and subtraction, you have to think in a totally abstract way, and the
way to think about calculus couldn't be explained to a monkey. I bet if
you demonstrated all the rules of, say, moving constants out of
integrals and moving negative signs into, out of, and around the trig
functions, those motions could be reproduced with training.
Todd Smith
UCF Mathematics
www.ExampleProblems.com (wiki) Math example problems and solutions
from all areas of graduate level mathematics.
====
Subject: Re: Why are there no math prodigy monkeys?
>>>> On 22 Jun 2005 22:56:32 -0700, double d wrote:
>>>> If there are 3 year old human prodigies who can do calculus, why
>>>> aren't there prodigy monkeys who can add, divide, and multiply?
>>> How could you possibly know that there are no such monkeys?
>> Because if there were, he'd have hired them, yet there are
>> none on his payroll? (A pity, too; they'd work for peanuts.)
> No. Monkeys are always too distracted by literary pursuits. Mark
> knows better than to hire that crowd.
Ah yes, they're always trying to type out the works of William
Shakespeare.
Robert Israel israel@math.ubc.ca
Department of Mathematics http://www.math.ubc.ca/~israel
University of British Columbia Vancouver, BC, Canada
====
Subject: Re: mathematician salaries
<05vte.2$45.931@news.uchicago.edu>
> I guess nobody doing a PhD needs to pay anything for his study.
Spoken like a true idiot who can't count properly.
Mark now knows that you don't view 5 years of your life as
worth anything, do you? How much money could a math grad student
have earned if he had a full time job in the real world for those 5
years of grad school?
====
Subject: Re: mathematician salaries
<05vte.2$45.931@news.uchicago.edu>
> Hint for Marky: You have not said who these instructors
> are or how they are related to new math PhDs.
Mark gave you the MIT link. Go look up their vitas yourself, all
posted. Mark is not your research assistant. That's 30 temp positions
no matter how you count it.
In your heart, you know Mark is right. That's what really counts in
research, isn't it?
====
Subject: Invariant Galilean Transformations (FAQ) On All Laws
Summary: All laws/equations are Galilean invariant when expressed
in the generalized cartesian coordinates demanded by basic
analytic geometry, vector algebra, and measurement theory.
Originator: faqserv@penguinlust.mit.edu
Disclaimer: approval for *.answers is based on form, not content.
Opponents of the content should first actually find out what
it is, then think, then request/submit to arbitration by the
appropriate neutral mathematics authorities. Flaming the hard
working, selfless, *.answers moderators evidences ignorance
and despicable netiquette.
Archive-Name: physics-faq/criticism/galilean-invariance
Version: 0.04.03
Posting-frequency: 15 days
Invariant Galilean Transformations (FAQ) On All Laws
(c) Eleaticus/Oren C. Webster
Thnktank@concentric.net
An obvious typo or two corrected.
The Britannica section revised to less
'pussyfooting' and to more directly
anticipate the elementary measurement
theory and basic analytic geometry
that is applied to the transformation
concept.

====
Subject: 1. Purpose
The purpose of this document is to provide the student of Physics,
especially Relativity and Electromagnetism, the most basic principles
and logic with which to evaluate the historic justification
of Relativity Theory as a necessary alternative to the classical
physics of Newton and Galileo.
We will prove that all laws are invariant under the Galilean
transformation, rather than some being noninvariant, after
we show you what that means.
We shall also show that another primal requirement that SR
exist is nonsense: Michelson-Morley and Kennedy-Thorndike do
indeed fit Galilean (c+v) physics.

====
Subject: 2. Table of Contents
1. Foreword and Intent
2. Table of Contents
3. The Principle of Relativity
4. The Encyclopedia Britannica Incompetency.
5. Transformations on Generalized Coordinate Laws
6. The data scale degradation absurdity.
7. The Crackpots' Version of the Transforms.
8. What does sci.math have to say about x0'=x0-vt?
9. But Doesn't x.c'=x.c?
10. But Isn't (x'-x.c')=(x-x.c) Actually Two Transformations?
11. But Doesn't (x'-x.c+vt) Prove The Transformation Time
Dependent?
12. But Isn't (x'-x.c')=(x-x.c) a Tautology?
13. But Isn't (x'-x.c')=(x-x.c) Almost the Definition of
a Linear Transform?
14. But The Transform Won't Work On Time Dependent Equations?
15. But The Transform Won't Work On Wave Equations?
16. But Maxwell's Equations Aren't Galilean Invariant?
17. First and Second Derivative differential equations.

====
Subject: 3. The Principle of Relativity and Transformation
If a law is different over there than it is here,
it is not one law, but at least two, and leaves us
in doubt about any third location. This is the
Principle of Relativity: a natural law must be the
same relative to any location at which a given event
may be perceived or measured, and whether or not the
observer is moving.
The idea of location translates to a coordinate
system, largely because any object in motion could
be considered as having a coordinate system origin
moving with it. If you perceive me moving relative
to you (who have your own coordinate system), will
your measurements of my position and velocity fit
the same laws my own, different measurements fit?
If a law has the same form in both cases it is
called covariant. If it is identical in form,
variables, and output values, it is called invariant.
What we're asking is that if the xcoordinate, x,
on one coordinate axis works in an equation, does
the coordinate, x', on some other, parallel axis
work? Speaking in terms of the axis on which x is
the coordinate, x' is the 'transformed' coordinate.
The situation is complicated because we're talking
about coordinates (locations), but in most meaningful
laws/equations, it is lengths/distances (and
time intervals) the equations are about, and x
coordinates that represent good, ratio scale measures
of distances are only interval scale measures on the
x' axis. [See Table of Contents for discussion of scales.]
So, if we have an xcoordinate in one system, then
we can call the x' value that corresponds to the same
point/location the transform of x.
In particular, the Principle of Relativity is embodied
in the form of the Galilean transformation, which
relates the original x, y, z, t to x', y', z', t' by
the transform equations x'=x-vt, y'=y, z'=z, t'=t in
the simplified case where attention is focused only
on transforming the x-axis, and not y and z. In the
case of Special Relativity, the x' transform is the
same except that x' is then divided by sqrt(1-(v/c)^2),
and t'=(t-xv/cc)/sqrt(1-(v/c)^2). In either case, v
is the relative velocity of the coordinate systems;
if there is already a v in the equations being
transformed, use u or some other variable name.
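The two sets of transform equations above can be written out as a small sketch (function names are ours; cc in the FAQ's notation is c^2):

```python
import math

def galilean(x, t, v):
    """x' = x - v*t, t' = t."""
    return x - v * t, t

def lorentz(x, t, v, c=299792458.0):
    """x' = (x - v*t)/sqrt(1-(v/c)^2), t' = (t - x*v/c^2)/sqrt(1-(v/c)^2)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return gamma * (x - v * t), gamma * (t - x * v / c ** 2)

# A difference of two x-coordinates is untouched by the Galilean
# transform, whatever t and v are:
x1, _ = galilean(40.0, 10.0, 0.5)
x0, _ = galilean(10.0, 10.0, 0.5)
assert x1 - x0 == 40.0 - 10.0
```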

====
Subject: 4. The Encyclopedia Britannica Incompetency.
One example of the traditional fallacious idea
that an equation is not invariant under the Galilean
transformation comes from the Encyclopedia Britannica:
Before Einstein's special theory of relativity
was published in 1905, it was usually assumed
that the time coordinates measured in all inertial
frames were identical and equal to an 'absolute
time'. Thus,
t = t'. (97)
The position coordinates x and x' were then
assumed to be related by
x' = x - vt. (98)
The two formulas (97) and (98) are called a
Galilean transformation. The laws of nonrelativistic
mechanics take the same form in all frames
related by Galilean transformations. This is the
restricted, or Galilean, principle of relativity.
The position of a light wave front speeding from
the origin at time zero should satisfy
x^2 - (ct)^2 = 0 (99)
in the frame (t,x) and
(x')^2 - (ct')^2 = 0 (100)
in the frame (t',x'). Formula (100) does not
transform into formula (99) using the
transformations (97) and (98), however.
.................................................
Besides the trivially correct statement of what the
Galilean 'transform' equations are, there is exactly
one thing they got right.
I. Eq100 is indeed the correct basis for discussing
the question of invariance, given that eq99 is
the correct 'stationary' (observer S) equation.
[Let observer M be the 'moving'system observer.]
In particular, eq100 is of exactly the same
form [the square of argument one minus the square
of argument two equals zero (argument three).]
II. It is nonsense to say eq99 should be derivable from
eq100; for one thing, the transforms are TO x' and
t' from x and t, not the other way around, and the
idea that either observer's equation should contain
within itself the terms to simplify or rearrange to
get to the other is ridiculous. As the transform
equations say, the relationship of t', x' to t, x
is based on the relative velocity between the two
systems, but neither the original (eq99) equation
nor the M observer equation is about a relationship
between coordinate systems or observers. One might
as well expect the two equations to contain banana
export/import data; there is no relevancy. The
'transform' equations are the relationships between
x' and x, t' and t and have nothing to do with what
one equation or the other ought to 'say'. The
equations' content is the rate at which light emitted
along the xaxes moves.
III. Most remarkable, the True Believer SR crackpots who
most despise the consequences of measurement theory
(demonstrable fact) contained in this document are
those who want to argue against our saying the
Britannica got eq100 right;
They insist that the correct equation is derived
directly from x'=x-vt and t'=t. Solve for x=x'+vt
and replace t with t', then substitute the result
in eq99: (x'+vt')^2 - (ct')^2 = 0.
Besides the fact that this results in an equation
with arguments exactly equal to eq99, they will
insist the transform is not invariant.
IV. A major justification they have for their idea of
the correct M system equation on which to base
the discussion of invariance, is that the variables
are M system variables, never mind the fact that
the arguments are S system values.
That argument of theirs is arrant nonsense. The
velocity v that S sees for the M system relative
to herself is the negative of what the M system
sees for the S system relative to himself.
In other words, x'+vt' is a mixed frame expression
and it is x'+(-v)t' that would be strictly M frame
notation, and that equation is far off base. [Work
it out for yourself, but make sure you try out an
S frame negative v so as not to mislead yourself.]
V. In I. we said: given that eq99 is the correct
'stationary' equation. Let's look at it closely:
x^2 - (ct)^2 = 0 (99)
This whole matter is supposed to be about coordinate
transforms. Is that what t is, just a coordinate?
No. It isn't, in general. Suppose you and I are both modelling
the same light event and you are using EST and I'm using PST.
'Just a time coordinate' is just a clock reading, and your t clock
reading says the light has been moving three hours longer
than my clock reading says. Well, that's what the idea that
t is a coordinate means.
Eq99 works if and only if t is a time interval, and in
particular the elapsed time since the light was emitted.
Thus, that equation works only if we understand just
what t is, an elapsed time, with emission at t=0.
However, we don't have to 'understand' anything if we use
a more intelligent and insightful form of the equation:
(x)^2 - [ c(t-t.e) ]^2 = 0,
where t.e is anyone's clock reading at the time of light
emission, and t is any subsequent time on the same clock.
Similarly, x is not just a coordinate, but a distance
since emission.
(x-x.e)^2 - [ c(t-t.e) ]^2 = 0 (99a)
VI. In the spirit of 'there is exactly one thing
they got right', the correct M system version
of eq99a is eq100a:
(x'-x.e')^2 - [ c(t'-t.e') ]^2 = 0 (100a)
Every observer in the universe can derive their
eq100a from eq99a and vice versa, not to mention to and
from every other observer's eq99a.
Now, THAT's invariance. [You do realize that every
eq100a reduces to eq99a, when you back substitute
from the transforms, right? t.e'=t.e, x.e'=x.e-vt.]

====
Subject: 5. Transformations on Generalized Coordinate Laws
The traditional Galilean transform is correct:
t' = t
x' = x - vt.
But remember this: a transform of x doesn't affect
just some values of x, but all of them, whether they
are in the formula or not. This is important if you
want to do things right. The crackpot position is
strongly against this sci.math verified position, and
the apparently standard coordinate pseudotransformation
they suggest is perhaps the result. [See Table of
Contents.]
Let's use a simple equation: x^2 + y^2 = r^2, which is
the formula for a circle with radius r, centered at a
location where x=0.
But what if the circle center isn't at x=0? Well, we'd
want to use the form analytic geometry, vector algebra,
and elementary measurement theory tells us to use, a form
where we make explicit just where the circle center is,
even if it is at x=x0=0:
(x-x0)^2 + (y-y0)^2 = r^2.
The circle center coordinate, x0, is an xaxis coordinate,
just like all the xvalues of points on the circle.
So, in proper generalized cartesian coordinate forms
of laws/equations we want to transform every occurrence
of x and x0, by whatever name we call it: x.c, x_e,
whatever.
So, what is the transformed version of (x-x0)? Why,
(x'-x0'); both x and x0 are x-coordinates, and every
x-coordinate has a new value on the new axis.
So, what is the value of (x'-x0') in terms of the
original x data? The transform that is true for x,
x'=x-vt, is also true for x0: x0'=x0-vt, so
(x'-x0')=[ (x-vt)-(x0-vt) ]=(x-x0).
In other words, when we use the generalized coordinate form
specified by analytic geometry, we find that the value of
(x'-x0') does not depend on either time or velocity in any
way, shape, form, or fashion.
Similarly for (y-y0).
We can treat time the same way if necessary: (t-t0).
The above is a proof that any equation in x,y,z,t is
invariant under the Galilean transforms. Just use the
generalized coordinate form, with (x-x0)/etc, in the
transformation process, not the incompetently selected
privileged form, with just x/etc.
[The form is privileged because it assumes the circle
center, point of emission, whatever, is at the origin of
the axes instead of at some less convenient point. After
transform, the coordinate(s) of the circle center/origin
are also changed, but the privileged form doesn't make
this explicit and screws up the calculations, which
should be based on (x'-x0') but are calculated as (x'-0).]
The value of (x'-x0') is the same as (x-x0). That makes
sense.
Draw a circle on a piece of paper, maybe to the right
side of the paper. On a transparent sheet, draw x and y
coordinate axes, plus x to the right, plus y at the top.
Place this axis sheet so the yaxis is at the left side
of the circle sheet.
Now answer two questions after noting the x-coordinate of
the circle center and then moving the axis sheet to the right:
(a) did the circle change in any way because you moved
the axis sheet (i.e. because you transformed the
coordinate axis)?
(b) did the coordinate of the circle center change?
The circle didn't change [although SR will say it did];
that means that (x'-x0') does indeed equal (x-x0).
The coordinate of the circle center did change, and it
changed at the same rate (vt) as did every point on
the circle. That means that x0'<>x0, and the fact the
circle center didn't change wrt the circle, means that
the relationship of x0' with x0 is the same as that of
any x' on the circle with the corresponding x: x'=x-vt;
x0'=x0-vt.
This is to prepare you for the True Believer crackpots that
say 'constant' coordinates can't be transformed; some even
say they aren't coordinates. These crackpots include some
that brag about how they were childhood geniuses, btw.
QED: Any law expressed in generalized Cartesian
coordinates is invariant under the Galilean
transform.
The use of the privileged form explains HOW the transformed
equation can be messed up, the next Subject explains what
the screwed up effect of the transform is, and how use
of the generalized form corrects the screwup.

====
Subject: 6. The data scale degradation absurdity.
The SR transforms and the Galilean transforms both
convert good, ratio scale data to inferior interval
scale data. The effect is corrected, allowed for,
when the transforms are conducted on the generalized
coordinate forms specified by analytic geometry and
vector algebra.
Both sets of transforms are 'translations' (lateral
movements of an axis, increasing over time in these
cases), but with the SR transform also involving a
rescaling. It is the translation term, vt in the x
transform to x', and xv/cc in the t transform to t',
that degrades the ratio scale data to interval scale
data. In general, rescaling does not affect scale
quality in the size-of-units sense we have here.
SR likes to consider its transforms just rotations,
however (in spite of the fact Einstein correctly said
they were 'translations', movements), and in the case
of 'good' rotations, ratio scale data quality is indeed
preserved, but SR violates the conditions of good
rotations; they are not rigid rotations and they don't
appropriately rescale all the axes that must be rescaled
to preserve compatibility.
The proof is in the pudding, and the pudding is the
combination of simple tests of the transformations.
We can tell if the transformed data are ratio scale
or interval.
Ratio scale data are like absolute Kelvin. A
measurement of zero means there is zero quantity of
the stuff being measured. Ratio scale data support
addition, subtraction, multiplication, and division.
The test of a ratio scale is that if one measure
looks like twice as much as another, the stuff
being measured is actually twice as much. With
absolute Kelvin, 100 degrees really is twice the
heat as 50 degrees. 200 degrees really is twice
as much as 100.
Interval scale data are like relative Celsius, which
is why your science teacher wouldn't let you use it
in gas law problems. There is only one mathematical
operation interval scales support, and that has to
be between two measures on the same scale: subtraction.
100 degrees relative (household) Celsius is not twice
as much as 50; we have to convert the data to absolute
Kelvin to tell us what the real ratio of temperatures
is.
However, whether we use absolute Kelvin or relative
Celsius, the difference in the two temperature readings
is the same: 50 degrees.
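A quick numeric check of that point (0 K corresponds to -273.15 degrees Celsius):

```python
# Two temperatures on the absolute Kelvin (ratio) scale and the same two
# on the relative Celsius (interval) scale.
k1, k2 = 50.0, 100.0
c1, c2 = k1 - 273.15, k2 - 273.15

assert k2 / k1 == 2.0                   # ratios are meaningful in Kelvin
assert c2 / c1 != 2.0                   # ...but not in household Celsius
assert (k2 - k1) == (c2 - c1) == 50.0   # differences agree on both scales
```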
Thus, if we know the real quantities of the 'stuff'
being measured, we can tell if two measures are on
a ratio scale by seeing if the ratio of the two
measures is the same as the ratio of the known
quantities.
If a scale passes the ratio test, the interval scale test
is automatically a pass.
If the scale fails the ratio test, the interval scale
test becomes the next in line.
It isn't just the bare differences on an interval
scale that provide the test, however. Differences
of two interval scale measures are ratio scale, so
it is ratios of two differences that tell the tale.
Let's do some testing, and remember as we do that our
concern is for whether or not the data are messed up,
not with 'reasons', excuses, or avoidance.

Are we going to take a transformed length (difference)
and see whether that length fits ratio or interval scale
definitions?
Of course not. Interval scale data are ratio after
one measure is subtracted from another. That is the
major reason the SR transforms can be used in science.
Let there be three rods, A, B, C, of length 10, 20, 40,
respectively. These lengths are on a known ratio scale,
our original x-axis, with one end of each rod at the
origin, where x = 0, and the other end at the coordinate
that tells us the correct lengths.
Note that these x-values are ratio scale only because
one end of each rod is at x = 0. That may remind you of
the correct way to use a ruler or yard/meterstick:
put the zero end at one end of the thing you are
measuring. Put the 1.00 mark there instead of the zero,
and you have interval scale measures.
Let A, B, C be 10, 20, 40.
Let a, b, c be x' at v = .5, t = 10.
x' = x - vt.
  A    B    C        a    b    c
 -------------      -------------
 10   20   40        5   15   35
 -------------      -------------
B/A = 2             b/a = 3
C/A = 4             c/a = 7
C/B = 2             c/b = 2.333
Obviously, the transformed
values are no longer ratio
scale. The effect is less on
the greater values.
B-A = 10            b-a = 10
C-A = 30            c-a = 30
C-B = 20            c-b = 20
Obviously, the transformed
values are now interval scale.
This will hold true for any
value of time or velocity.
(C-A)/(B-A) = 3     (c-a)/(b-a) = 3
(C-B)/(B-A) = 2     (c-b)/(b-a) = 2
Obviously, the ratios of the
differences are ratio scale,
being identical to the ratios
of the corresponding original
-- ratio scale -- differences.
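The tests just run on the rods can be scripted. Here is a minimal Python sketch of the same demonstration, assuming the same values (v = .5, t = 10) and the galilean x' = x - vt; the function name `shift` is our own label, not anything from the faq or a library:

```python
# Ratio-scale vs. interval-scale tests on the rods A, B, C.
def shift(x, v=0.5, t=10):
    """Galilean transform of a single coordinate: x' = x - v*t."""
    return x - v * t

A, B, C = 10, 20, 40                     # known ratio scale lengths
a, b, c = shift(A), shift(B), shift(C)   # 5.0, 15.0, 35.0

# Ratio test: the transformed ratios no longer match the originals.
assert B / A == 2 and b / a == 3               # 2 vs 3: ratio quality lost
assert C / B == 2 and abs(c / b - 2.333) < 1e-3

# Interval test: differences survive the shift unchanged.
assert C - A == c - a == 30

# Ratios of differences are back on a ratio scale.
assert (C - A) / (B - A) == (c - a) / (b - a) == 3
```

Any other v and t give the same pattern: ratios of raw transformed values drift, differences and ratios of differences do not.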
The main difference between these results and the SR
results is that the differences do not correspond so
neatly to the original, ratio scale, differences.
This is due only to the rescaling by 1/sqrt(1-(v/c)^2).
The ratios of the differences on the transformed values
do correspond neatly and exactly to the ratio scale
results.
Using the generalized coordinate form, such as (x-x0),
the transform produces an interval scale x' and an
interval scale x0'. That gives us a ratio scale (x'-x0'),
just like -- and equal to -- (x-x0).

====
Subject: 7. The Crackpots' Version of the Transforms.
It has become apparent -- whether misleading or not --
that the crackpot responses to the obvious derive from
a common source, whether it be bandwagoning or their
SR instructors.
Below, in the sci.math subject, we see that all sci.math
respondents agree with the basic controversial position
of this faq: every coordinate is transformed, whether a
supposed constant or not.
Think about it, the generalized coordinate of a circle
center, x0, applies to infinities upon infinities of
circle locations (given y and z, too); it is a constant
only for a given circle, and even then only on a given
coordinate axis.
And even variables are often held 'constant' during
either integration or differentiation.
The utility of a variable is that you can discuss all
possible particular values without having to single out
just one. That utility does not make particular -- singled
out -- values on the variable's axis not values of the
variable just because they have become named values.
In any case, all that is preamble to the incompetent idea
they have proposed for a transform of coordinates. It is
based on the idea that the circle center, point of emission,
whatever, has coordinates that cannot be transformed.
Let there be an equation, say (x)^2 - (ict)^2 = 0.
What is the transformed version of that equation?
Answer: (x')^2 - (ict')^2 = 0. That's the one thing the
Britannica got right. Note that the leading crackpot just
criticized this faq for presuming to correct the Britannica,
but it then and before poses the incompetent pseudo-transform
we analyze here in this section.
x to x' and t to t' are obviously coordinate transforms;
the x and t coordinates have been replaced by the
coordinates in the primed system.
A transform of an equation from one coordinate system to
another is NOT a substitution of the/a definition of x
for itself; that is not a coordinate transformation.
The most that can be said for such a substitution is that
it is a change of variable.
But the crackpots are calling this a coordinate
transform of the original equation:
(x'+vt)^2 - (ict')^2 = 0.
It is not a coordinate transform, of course, except
accidentally. (x'+vt) is not the primed system
coordinate, it is another form/expression of x. They
get that substitution by solving x' = x - vt for x; x = x'+vt.
So, by incompetent misnomer, they accomplish what they
have been railing against all along.
It has been the generalized coordinate form in question all
this time:
(x-x0)^2 - (ict)^2 = 0.
Here they substitute for x instead of transforming to the
primed frame:
(x'+vt-x0)^2 - (ict')^2.
The (x'+vt) is still x, but see what they have accomplished
by their mis/malfeasance:
[x'+vt-x0] = [x'+(vt-x0)] = [x'-(x0-vt)]
           = [x'-x0'].
The crackpots have been bragging about how you don't
have to transform the circle center's coordinate to
transform the circle center's coordinate. Bragging
that what they were doing was not what they said
they were doing.
This does give us insight as to some of the crackpot
variations on their x0' <> x0 - vt theme, which in all
their variations will be discussed in later sections.
They are used to seeing the mixed coordinate form,
(x'+vt-x0), without realizing what it represented,
so -- accompanied with a lack of understanding of
the term 'dependent' -- they are used to seeing just
the one vt term, and not the one hidden in the
definition of x', and are used to imagining it makes the
whole expression time dependent and thus not invariant.
About which, let x=10, x0=20, v=10, and t
variously 10 and 23:
(x-x0) = -10. Using their (x'+vt-x0):
For t=10, we have (x'+vt-x0) = [ (10-10*10) + (10*10) - (20) ]
= -90 + 100 - 20
= -10
= (x-x0)
For t=23, we have (x'+vt-x0) = [ (10-10*23) + (10*23) - (20) ]
= -220 + 230 - 20
= -10
= (x-x0)
The result depends in no way on the value of time;
we showed the obvious for a couple of instances of t
just so you can see that the crackpots not only do
not understand the obvious logic of the algebra
{ (x'-x0') = [ (x-vt)-(x0-vt) ] = (x-x0) } -- which shows
that the transform has no possible time term effect --
but they don't understand even a simple arithmetic
demonstration of the facts.
Oh. Their (x'+vt-x0) or (x'+vt'-x0) reduces the same
way since t'=t:
(x-vt+vt-x0) = (x-x0).
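The arithmetic above is easy to machine-check. A small Python sketch, illustrative only, using the same numbers (x=10, x0=20, v=10) plus one extra t for good measure:

```python
# Check that the mixed form (x' + v*t - x0) is the same number,
# namely (x - x0), at every time t: no time dependence survives.
x, x0, v = 10, 20, 10

for t in (10, 23, 1000):
    x_prime = x - v * t                      # x' = x - vt
    assert x_prime + v * t - x0 == x - x0 == -10
```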
Their process, which says (x'+vt') is the transform
of x, says that (x'+vt') is the moving system location
of x, but it can't be, because x is moving further in
the negative direction from the moving viewpoint.
That formula will only work out with v<0, which is indeed
the velocity the primed system sees the other moving at.
However, that formula cannot be derived from x' = x - vt,
the formula for transformation of the coordinates from
the unprimed to the primed.

====
Subject: 8. What does sci.math have to say about x0' = x0 - vt?
The crackpots' positions/arguments were put to sci.math
in such a way that at least two or three who posted
responses thought it was your faqer who was on the idiot's
side of the questions.
Their responses:

I. x0' = x0. In other words: x0' <> x0 - vt, or constant
values on the x-axis are not subject to the transform.
No. x0' = x0 - vt.
Well, if you want, you could define constant values on the x-axis, but
in the context of the question that is not relevant. The relevant fact is
that if the unprimed observer holds an object at point x0, then the
primed observer assigns to that object a coordinate x0' which is
numerically related to x0 by x0' = x0 - vt.
What does this mean? The line x=x0 will give x' = x - v*t = x0 - v*t', so if x0'
is to give the coordinate in the (x',t')-system, it will be given by
x0' = x0 - v*t': i.e., it is not given by a constant. Thus, being at rest
(constant x-coordinate) is a coordinate-dependent concept.
Sounds very false. We can say that the representation of the point X0 is
the number x0 in the unprimed system, and x0' in the primed system.
Clearly x0 and x0' are different, if vt is not zero. However one may say
that (though it sounds/is stupid) the point X0 itself is the same
throughout the transformation. However that expression sounds
meaningless, since a transform (ok, maybe we should call it a change of
basis) is only a function that takes the point's representation in one
system into the same point's representation in another system. It is
preferable to use three notations: X0 for the point itself and x0 and
x0' for the point's representations in some coordinate systems.

====
Subject: 9. But Doesn't x.c'=x.c?
That idea is one of the most idiotic to come up, and it does
so frequently. And in a number of guises.
The idea being that x.c' <> x.c - vt, with x.c being what
we have called x0 above; the notation makes no difference.
Some crackpots have managed to maintain that position even
after graphs have illustrated that such an idea means that
after a while a circle center represented by x.c' could be
outside the circle.
The leading crackpot just made that explicit, as far as
one can tell from his befuddled post in response to a line
about active transforms, which are actually moving body
situations, not coordinate transformations:

e>An active transform is not a coordinate transform, ...
Right, it is a transform of the center (in the opposite direction)
done to effect the change of coordinates without a coordinate
transform. ...
E: Transform of the center? Center of a circle?
He really is saying a circle center moves in
the opposite direction of the circle! Right?

If r=10 and x.c was at x.c=0, then the points on the circle
(10,0), (-10,0), (0,10) and (0,-10) could at some time become
(-10,0), (-30,0), (-20,10), and (-20,-10), but with x.c'=x.c,
the circle center would be at (0,0) still! The circle is here
but its center is way, way over there! Indeed, although a change
of coordinate systems is not movement of any object described in
the coordinates, the x.c'=x.c crackpottery is tantamount to the
circle staying put but the center moving away. Or vice versa.
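The runaway-center point can be illustrated numerically. A sketch assuming r = 10, center at the origin, and a shift of v*t = 20 (our choice of v and t, matching the coordinates in the paragraph above):

```python
# If every point of the circle shifts by v*t but the 'center' is held
# fixed (the x.c' = x.c claim), that center ends up off the circle.
v, t = 2, 10                     # so v*t = 20

points = [(10, 0), (-10, 0), (0, 10), (0, -10)]     # circle r=10 about (0,0)
shifted = [(x - v * t, y) for x, y in points]
assert shifted == [(-10, 0), (-30, 0), (-20, 10), (-20, -10)]

def dist(c, p):
    """Euclidean distance from claimed center c to point p."""
    return ((p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2) ** 0.5

good_center = (0 - v * t, 0)     # transformed along with its circle
bad_center = (0, 0)              # the x.c' = x.c claim

assert all(dist(good_center, p) == 10 for p in shifted)     # still the center
assert any(dist(bad_center, p) > 10 for p in shifted)       # 30 units out
```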

====
Subject: 10. But Isn't (x'-x.c')=(x-x.c) Actually Two Transformations?
One crackpot puts the (x'-x.c') = (x-vt - x.c+vt) relationship
like this:
(x-vt+vt - x.c).
See, he says, that is transforming x (with x-vt - x.c) and then
reversing the transform (x-vt+vt - x.c).
That's just another crackpot form of the idiocy that
x.c' <> x.c - vt. You'll have noticed the implication
is that there is no transform vt term relating to x.c.

====
Subject: 11. But Doesn't (x'-x.c+vt) Prove The Transformation
Time Dependent?
That particular crackpottery is perhaps more corrupt than
moronic, since it includes deliberately hiding a vt term from
view, and pretending it isn't there. [However, we have seen
above that it is a familiar incompetency, and not likely an
original.]
Look, the crackpots say, there is a time term in the
transformed (x' - x.c+vt). The transform isn't invariant!
It's time dependent!
Just put x' in its original axis form, also, which reveals
the other time term, the one they hide:
(x'-x.c+vt) = (x-vt - x.c+vt) = (x-x.c).
So, at any and all times, the transform reduces to the
original expression, with no time term on which to be
dependent.
Then there is the fact that if you leave the equation
in any of the various notation forms -- with or without
reducing them algebraically -- the arithmetic always comes
down to the same as (x-x.c). That means nothing to crackpots,
but may mean something to you.

====
Subject: 12. But Isn't (x'-x.c')=(x-x.c) a Tautology?
My dictionary relates 'tautology' to needless repetition.
That's another form of the x.c' <> x.c - vt idiocy.
The repetition involved is the vt transformation term.
Apply the vt term to the x term, and it is needless
repetition to apply it anywhere again? The 'again' is
to the x.c term. The x.c' = x.c crackpot idiocy.
The repetition of the vt terms is required by the presence
of two x values to be transformed.
Be sure to note the next section.

====
Subject: 13. But Isn't (x'-x.c')=(x-x.c) Almost the Definition of
a Linear Transform?
Now, how on earth can we relate a tautology to a basic
definition in math? Here is the definition:

A linear transformation, A, on the space is a method of
corresponding to each vector of the space another vector of the
space such that for any vectors U and V, and any scalars
a and b,
A(aU+bV) = aAU + bAV.

Let points on the sphere satisfy the vector X={x,y,z,1},
and the circle center satisfy C={x.c,y.c,z.c,1}. Let a=1,
and b=-1.
Let A = ( 1 0 0 -ut )
        ( 0 1 0 -vt )
        ( 0 0 1 -wt )
        ( 0 0 0   1 )
A(aX+bC) = aAX + bAC.
aX+bC = ( x-x.c, y-y.c, z-z.c, 0 ).
The left hand side:
A( x-x.c, y-y.c, z-z.c, 0 )
= ( x-x.c, y-y.c, z-z.c, 0 ).
The right hand side:
aAX = ( x-ut, y-vt, z-wt, 1 ).
bAC = ( -x.c+ut, -y.c+vt, -z.c+wt, -1 ).
and
aAX+bAC = ( x-x.c, y-y.c, z-z.c, 0 ).
Need it be said?
Sure: QED. On the galilean transform the
definition of a linear transform,
A(aU+bV)=aAU + bAV,
is completely satisfied.
The generalized form transforms exactly and
nonredundantly -- with ONE TRANSFORM, not a
transform and reverse transform -- and
nontautologically, just as the very definition
of a linear transform says it should.
And does so with absolute invariance, with this
galilean transformation.
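The QED above can be replayed numerically. A plain-Python sketch, with illustrative values for u, v, w, t of our own choosing; `matvec` is our helper, not a library call:

```python
# Replay of the linearity check A(aX + bC) = aAX + bAC with the
# galilean translation matrix, using a = 1 and b = -1.
def matvec(A, x):
    """Multiply a 4x4 matrix by a 4-vector."""
    return [sum(A[i][j] * x[j] for j in range(4)) for i in range(4)]

u, v, w, t = 1.0, 2.0, 3.0, 5.0
A = [[1, 0, 0, -u * t],
     [0, 1, 0, -v * t],
     [0, 0, 1, -w * t],
     [0, 0, 0, 1]]

X = [7.0, 8.0, 9.0, 1.0]        # a point on the sphere
C = [2.0, 3.0, 4.0, 1.0]        # the center
a, b = 1.0, -1.0

aXbC = [a * xi + b * ci for xi, ci in zip(X, C)]
lhs = matvec(A, aXbC)
rhs = [a * p + b * q for p, q in zip(matvec(A, X), matvec(A, C))]
assert lhs == rhs == [5.0, 5.0, 5.0, 0.0]   # = (x-x.c, y-y.c, z-z.c, 0)
```

Because the fourth component of aX+bC is zero, the translation column of A contributes nothing to the left hand side, which is the invariance in matrix clothing.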

====
Subject: 14. But The Transform Won't Work On Time Dependent Equations?
The main crackpot that has asserted such a thing was referring
to equations such as in Subject 4, above: the Light Sphere
equation, for which we have shown repeatedly elsewhere that the
numerical calculations are identical for any primed values as
for the unprimed values.
The presence -- before transformation -- of a velocity term
seems to confuse the crackpots. It turns out there is extreme
historical reason for this, as you will see in the
subject on Maxwell's equations.

====
Subject: 15. But The Transform Won't Work On Wave Equations?
See Subject 17, below, for a discussion of Second Derivative
forms and the galilean transforms.

====
Subject: 16. But Maxwell's Equations Aren't Galilean Invariant?
Oh? Just what is the magical term in them that prevents
(x'-x.c') = (x-vt - x.c+vt) = (x-x.c) from holding true?
It turns out not to be magic, but reality, that interferes
with the application of the galilean transforms to the
generalized coordinate form(s) of Maxwell: there are no
coordinates to transform!
When True Believer crackpots are shown the simple
demonstration that the galilean transform on
generalized cartesian coordinates is invariant,
their first defense is usually an incredibly stupid
x0'=x0, because the coordinate of a circle center,
or point of emission, etc, is a constant and can't
be transformed.
The last defense is that Maxwell's equations are not
invariant under that coordinate transform. When
asked just what magic occurs in Maxwell that would
prevent the simple algebra
(x'-x0') = [ (x-vt)-(x0-vt) ] = (x-x0)
from working, and when asked for a demonstration,
they will never supply one, however many hundreds of
times their defense is asserted.
The reason may help you understand part of Einstein's
1905 paper in which he gave us his absurd Special
Relativity derivation:
THERE ARE NO COORDINATES IN THE EQUATIONS TO BE TRANSFORMED.
Einstein gave the electric force vector as E=(X,Y,Z)
and the magnetic force vector as B=(L,M,N), where the
force components in the direction of the x axis are
X and L, Y and M are in the y direction, Z and N in
the z direction.
Those values are not, however, coordinates, but values
very much like acceleration values.
BTW, the current fad is that E and B are 'fields', having
been 'force fields' for a while, after being 'forces'.
So, when Einstein says he is applying his coordinate
transforms to the Maxwell form he presented, he is
either delusive or lying.
(a) there are no coordinates in the transform equations
he gives us for the Maxwell transforms, where
B = beta = 1/sqrt(1-(v/c)^2):
X' = X.              L' = L.
Y' = B(Y-(v/c)N).    M' = B(M+(v/c)Z).
Z' = B(Z+(v/c)M).    N' = B(N-(v/c)Y).
X is in the same direction as x, but is not a coordinate.
Ditto for L. They are not locations, coordinates on the
x-axis, but force magnitudes in that direction.
Similarly for Y and M and y, and Z and N and z.
(b) the v of the coordinate transforms is in Maxwell
before any transform is imposed; Einstein's transform
v is the velocity of a coordinate axis, not the velocity
already present in the equations he touched.
(c) if they were honest Einsteinian transforms, they'd be
in x, which means it is X and L that are supposed to be
transformed, not Y and M, and Z and N. And when SR does
transform more than one axis, each axis has its own
velocity term; using the v along the x-axis as the v
for a y-axis and z-axis transform is thus trebly absurd:
the axes perpendicular to the motion are not changed
according to SR, the v used is not their v, and the v
is not a transform velocity anyway.
(d) as everyone knows, the effects of E and B are on the
velocity. Both the speed and direction are changed
by E and B, but v -- the speed -- is a constant in SR.
As absurd as are the previously demonstrated Einsteinian
blunders, this one transcends error and is an incredible
example of True Believer delusion propagating over decades.
The components of E and B do differ from point to point,
and in the variations that are not coordinate free,
they are subject to the usual invariant galilean
transformation when put in the generalized coordinate form.

The SR crackpots don't know what coordinates are. The
various things they call coordinates include coordinates,
but also include a variety of other quantities.

1. One may express coordinates in a one-axis-at-a-time
manner [like x^2+y^2=r^2] but it is the use of vector
notation that shows us what is going on. In vector
notation the triplet x,y,z [or x1,x2,x3, whatever]
represents the three spatial coordinates, but there
are so-called basis vectors that underlie them. Those
may be called i,j,k. Thus, what we normally treat as
x,y,z is a set of three numbers TIMES a basis vector
each.
2. These e*i, f*j, g*k products can have a lot of meanings.
If e, f, g are distances from the origin of i,j,k then
e*i, f*j, g*k are coordinates: distances in the directions
of i,j,k respectively, from their origin. That makes the
triplet a coordinate vector that we describe as being an
x,y,z triplet; perhaps X=(x,y,z).
The e*i, f*j, g*k products could be directions; take any
of the other vectors described above or below and divide the
e,f,g numbers by the length of the vector [sqrt(e^2+f^2+g^2)].
That gives us a vector of length=1.0, the e,f,g values of
which show us the direction of the original vector. That
makes the triplet a direction vector that we describe as
being an x,y,z triplet; perhaps D=(x,y,z).
The e*i, f*j, g*k products could be velocities; take any
of the unit direction vectors described above and multiply
by a given speed, perhaps v. That gives a vector of length
v in the direction specified. That makes the triplet a
velocity vector that we describe as being an x,y,z triplet;
perhaps V=(x,y,z). Each of the three values, e,f,g, is the
velocity in the direction of i,j,k respectively.
The e*i, f*j, g*k products could be accelerations; take any
of the unit direction vectors described above and multiply
by a given acceleration, perhaps a. That gives a vector of
length a in the direction specified. That makes the triplet
an acceleration vector that we describe as being an x,y,z
triplet; perhaps A=(x,y,z). Each of the three values, e,f,g,
is the acceleration in the direction of i,j,k respectively.
The e*i, f*j, g*k products could be forces (much like accel
erations); take any of the unit direction vectors described
above and multiply by a given force, perhaps E or B. That
gives a vector of length E or B in the direction specified.
That makes the triplet a force vector that we describe as
being an x,y,z triplet; perhaps E=(x,y,z) or B=(x,y,z). Each
of the three values, e,f,g, is the force in the direction of
i,j,k respectively.
Einstein's -- and Maxwell's -- E and B are
not coordinate vectors.
There is another variety of intellectual befuddlement that
misinforms the idea that Maxwell isn't invariant under the
galilean transform: confusions about velocities.
Velocities With Respect to Coordinate Systems.

Aaron Bergman supplied the background in a post to a sci.physics.*
newsgroup:
Imagine two wires next to each other with a current I in each.
Now, according to simple E&M, each current generates a magnetic
field and this causes either a repulsion or attraction between
the wires due to the interaction of the magnetic field and the
current. Let's just use the case where the currents are parallel.
Now, suppose you are running at the speed of the current between
the wires. If you simply use a galilean transform, each wire,
having an equal number of protons and electrons is neutral. So,
in this frame, there is no force between the wires. But this is a
contradiction.
First of all, the invariance of the galilean transform, (x'-x.c')
= (x-x.c), ensures that it is an error to imagine there is any
difference between the data and law in one frame and in another;
the usual, convenient rest frame is the best frame and only frame
required for universal analysis. [Well, x' <> x and x.c' <> x.c, but
(x'-x.c') = (x-x.c).]
Second, given that you decide unnecessarily to adapt a law to
a moving frame, don't confuse coordinate systems with meaningful
physical objects, like the velocity relative to a coordinate
system instead of relative to a physical body or field.
In other words, what does current velocity with respect to a
coordinate system have to do with physics?
Nothing. Certainly not anything in the example Bergman gave.
What is relevant is not current velocity with respect to a
coordinate system, but current velocity with respect to wires
and/or a medium. The velocity of an imaginary coordinate
system has absolutely nothing to do with meaningful physical
velocity. You can -- if you are insightful enough and don't violate
item (e) -- identify a coordinate system and a relevant physical
object, but where some v term in the pretransformed law is
in use, don't confuse it with the velocity of the coordinate
transform.
Velocities With Respect to ... What?

Albert Einstein opened his 1905 paper on Special Relativity
with this ancient incompetency:
The equations of the day had a velocity term that was taken
as meaning that moving a magnet near a conductor would create
a current in the conductor, but moving a conductor near a
magnet would not. This was belied by fact, of course.
The important velocity quantity is the velocity of the
magnet and conductor with respect to each other, not to
some absolute coordinate frame (as far as we know) and
not to an arbitrary coordinate system.
One possible cause was the idea: but the equation says the magnet
must be moving wrt the coordinate system or ... the absolute
rest frame.
There not being anything in the equation(s) to say either of
those, it is amazing that folk will still insist the velocity
term has nothing to do with velocity of the two bodies wrt
each other.


====
Subject: 17. First and Second Derivative differential equations.
One of the intellectually corrupt ways of
denying the very simple demonstration of
galilean invariance of all laws expressed
in the generalized coordinate form demanded
by analytic geometry, vector analysis, and
measurement theory
[ (x'-x.c') = [ (x-vt)-(x.c-vt) ] = (x-x.c) ]
is the assertion that those equations 'over there'
(usually Maxwell or wave) are somehow immune to
the elementary laws of algebra used to demonstrate
the invariance. [Unfortunately, the
assertions are never accompanied by reference
to the magical math that makes elementary
algebra invalid. Wonder why that is?]
Part of the time it is based on the old lore
based on the incompetent transformation of
the privileged form of an equation instead
of the correct form. [Evidence of this is
any reference to an effect due to the velocity
of the transform; it falls out algebraically
-- as you see above -- and cancels out
arithmetically -- as you can see above.]
But usually it is just whistling in the dark,
waving the cross (swastika, I'd say) at
the mean old vampire.
The most general equation that could be conjured
up is a differential with either First or Second
Derivatives.
Let's examine the plausibility of such magical,
noninvariance assertions.
(a) to get a Second Derivative you must have
a First Derivative.
(b) to get a First Derivative you must have
a function to differentiate.
(c) to get a Second Derivative you must have
a function in the second degree.
So, let us examine the question as to whether
any such common Maxwell/wave equation will
differ for
(a) the common, privileged form, represented
as ax^2, with a being an unknown constant
function.
(b) the generalized cartesian form, represented
as a(x-x.c)^2 = ax^2 - 2ax(x.c) + ax.c^2,
with a being an unknown constant function.
(c) the transformed generalized cartesian form,
represented as a(x-vt - x.c+vt)^2, same as for
(b), = ax^2 - 2ax(x.c) + ax.c^2, of course,
with a being an unknown constant function.
I. for (a), remembering that x.c is a constant,
and that this version is only correct because
x.c=0, otherwise (b) is the correct form:
d/dx ax^2 = 2ax
(d/dx)^2 ax^2 = 2a
II. for (b), remembering that x.c is a constant.
d/dx (ax^2 - 2ax(x.c) + ax.c^2) = 2ax - 2ax.c
(d/dx)^2 (ax^2 - 2ax(x.c) + ax.c^2) = 2a
III. for (c); same as for (b).
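The three derivative computations can be spot-checked with central finite differences. A minimal sketch, with arbitrary values of a, x.c, v, t of our own choosing (central differences are exact for quadratics up to roundoff):

```python
# Finite-difference check that the second derivative is 2a for the
# privileged form (valid only when x.c = 0), the generalized form,
# and the transformed form alike.
def d2(f, x, h=1e-4):
    """Central-difference estimate of the second derivative at x."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

a, xc, v, t = 3.0, 5.0, 2.0, 7.0

privileged  = lambda x: a * x ** 2                            # assumes xc = 0
generalized = lambda x: a * (x - xc) ** 2
transformed = lambda x: a * ((x - v * t) - (xc - v * t)) ** 2 # = generalized

for f in (privileged, generalized, transformed):
    assert abs(d2(f, 1.7) - 2 * a) < 1e-3                     # 2a = 6 each time
```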
So, what we have seen so far is
(1) differential equations in the second degree
-- the wave equations -- must clearly be the same for
all forms: the privileged form in x, the generalized
cartesian form in x and the centroid, x.c, or the
transformed generalized cartesian form.
That is, anyone who imagines that correct usage
gives different results for galilean transformed
frames is at first showing his ignorance, and in
the end showing his intellectual corruption.
(2) As far as the First Derivatives are concerned, the
only cases in which there really is a difference between
the two forms is where x.c <> 0, and in that case, the
use of the privileged form is obviously incompetent.
So, how do you correctly use the differential equations?
If you are using rest frame data with the centroid
at x=0, etc, you can't go wrong without trying to
go wrong.
If you are using rest frame data with the centroid
not at x=0, you must use (x-x.c) anyplace x appears
in the equation.
If you are using moving frame data, you must use the
moving frame centroid as well as the light front
(or whatever) moving frame data itself, perhaps first
calculating (x'-x.c'), which equals (x-x.c), which is
obviously correct, and which is obviously the plain old
correct x of the privileged form.
Unless, of course, there really is some magical term
or expression that invalidates the obvious and elementary
algebra of the invariance demonstration.
Or maybe you just whistle when you don't want basic
algebra to hold true.
Eleaticus
!?!?!?!?!?!?!?!?!?
! Eleaticus Oren C. Webster ThnkTank@concentric.net ?
! Anything and everything that requires or encourages systematic ?
! examination of premises, logic, and conclusions ?
!?!?!?!?!?!?!?!?!?
Supersedes:
X-Last-Updated: 1999/10/17
====
Subject: (SR) Lorentz t', x' = Intervals
Summary: The Lorentz transforms themselves are proof t' and x'
cannot possibly be just coordinates. Examination
of their derivation verifies their identity as
intervals.
Originator: faqserv@penguinlust.mit.edu
Disclaimer: approval for *.answers is based on form, not content.
Opponents should first actually find out what the content is,
then think, then request/submit to arbitration by the
appropriate neutral mathematics authorities. Flaming the hard
working, selfless, *.answers moderators evidences ignorance
and atrocious netiquette.
Version: 0.02.1
Archive-name: physics-faq/criticism/lorentz-intervals
Posting-Frequency: 15 days
(SR) Lorentz t', x' = Intervals
(c) Eleaticus/Oren C. Webster
Thnktank@concentric.net

====
Subject: 1. Introduction with the obvious debunking
of the use of 'just coordinates' in any
scientific formula.
Defenders of the Special Relativity faith are especially
fond of telling opponents of their spacetime fairy tales
that they do not know the difference between coordinates
and magnitudes.
That may often be so, but the fault lies ultimately with
SR dogma. The Lorentz-Einstein transformations cannot
possibly be 'just coordinates', which is the interpretation
required to support the many sideshow carnival acts
with which the SR faithful bedazzle the public and establish
their moral and intellectual superiority.
If I get in my car and drive steadily for a few hours at 50
kilometers per hour, is 50t the distance I travel?
Of course not. The last time my hours-counting 'just
coordinates' clock was set to zero was when Zeno first reported
one of his paradoxes to Parmenides.
That was a long time ago, so my t is not useful for such
purposes unless you also use my clock to establish the starting
time, perhaps t0, and use the formula 50(t - t0) to calculate the
distance.
In any case, my t is even then not 'just a coordinate' because
it always represents particular elapsed times that can be
used in the (t - t0) form to calculate perfectly good time
intervals (elapsed times).
Alternatively, I could (re)set my clock to zero at the start
of some meaningful time interval, in which case my t shows a
scientifically perfect current and/or end time.
In which case, the Lorentz-Einstein t' = (t - vx/cc)/g is a function
of an elapsed time interval (not 'just a coordinate') and a time
interval (vx/cc; the interval amount the t' clock is being
screwed up at time t) and thus cannot be 'just a coordinate'
since neither of the independent variables is such a 'just' thing.
[Their meaning is shown below, step-by-step.]
If it takes me 50 minutes to cross the Interstate highway,
was x/50 my velocity crossing it?
Of course not. The origin of all my axes is at the very
spot where Zeno first presented his first paradox to
Parmenides. That makes my x equal a couple of thousands of
miles, plus, and it is not useful for such purposes unless
you establish the starting x value, perhaps x0, and use the
formula (x - x0)/50 to calculate my velocity.
In any case, even then my x is not 'just a coordinate'
because it always represents particular distance intervals
that can always be used in the (x - x0) form for any and every
scientific purpose.
Alternatively, I could move my x-axis origin to the starting
(zero) point of some meaningful distance, in which case my x
shows a scientifically perfect current and/or end distance.
In which case, the Lorentz-Einstein x' = (x - vt)/g is a function
of a current/ending distance interval (not 'just a coordinate')
and a distance interval (vt; the interval amount the x' axis
is being screwed up at time t) and thus cannot be 'just a coordinate'
since neither of the independent variables is such a 'just' thing.
[Their meaning is shown below, step-by-step.]
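The clock and odometer examples above reduce to two lines of arithmetic each. A sketch with made-up readings (the numbers are ours, for illustration only):

```python
# Distance needs the elapsed time (t - t0), not the raw clock reading t.
speed = 50                        # km/h
t0, t = 100000.0, 100003.0        # arbitrary 'just coordinate' clock readings
distance = speed * (t - t0)       # 50(t - t0), not 50t
assert distance == 150.0

# Velocity needs the distance interval (x - x0), not the raw coordinate x.
minutes = 50
x0, x = 3200.0, 3225.0            # arbitrary x-axis readings
velocity = (x - x0) / minutes     # (x - x0)/50, not x/50
assert velocity == 0.5
```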

====
Subject: 2. Table of Contents
1. Introduction with the obvious debunking
of the use of 'just coordinates' in any
scientific formula.
2. Table of Contents.
3. The Lorentz-Einstein transforms.
4. The 'just coordinates' argument.
5. Singlesystem, littlepurpose ambiguity.
6. Relating two coordinate measures/systems.
7. Distances and moving coordinate axes.
8. Time intervals.
9. Einstein's (1905) derivations.
10. A word about intervals.
11. Intervals versus the Twins Paradox.
12. Summary

====
Subject: 3. The Lorentz-Einstein transforms
Special Relativity's spacetime circus is based on
the 'transformation' equations by which it is believed
one can relate a nominally 'stationary' system's space
and time coordinates to those of an inertially (not
accelerating) moving other observer.
That moving observer's own physical body and coordinate
system might have been identical in size to those of the
stationary observer before the traveller began moving,
but are 'seen' as very different by the stationary observer
when the relative velocity of the two is great enough, a
high percentage of the velocity of light.
Concerning ourselves, as is customary, with just
the spatial coordinate axis that lies parallel to
the direction of motion, and with time, Einstein
arrived at these formulas that relate the moving
system measures or coordinates (x' and t') to the
stationary system coordinates (x and t):
x' = (x - vt)/sqrt(1 - vv/cc) (Eq 1x)
t' = (t - vx/cc)/sqrt(1 - vv/cc) (Eq 1t)
The v is for the two systems' relative velocity as seen
by the stationary observer, and is positive if the
direction is toward higher values of x. By consensus,
the higher values of the moving system's x'-axis also lie in
that direction, and all axes parallel the other system's
corresponding axis.
We used vv to mean the square of v but might use v^2
for that purpose below. Similarly for c.
Because it is believed that no physical object can
reach or exceed c, the square-root term in both
denominators is presumed always less than one, which
means that the formulas say both x' and t' will tend to
be greater than x and t, respectively. However,
SRians call the x' result 'contraction', which means
shortening, and the t' result 'dilation', which
means increasing.
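For concreteness, the two formulas can be evaluated numerically. This Python sketch uses made-up values (c normalized to 1, v = 0.6c, x = 10, t = 5), purely to show the mechanics:

```python
import math

def lorentz(x, t, v, c=1.0):
    """Eq 1x and Eq 1t: map stationary (x, t) to moving (x', t')."""
    g = math.sqrt(1 - v * v / (c * c))   # the square-root term, < 1 for |v| < c
    xp = (x - v * t) / g
    tp = (t - v * x / (c * c)) / g
    return xp, tp

# Hypothetical values: v = 0.6c, so the denominator is sqrt(1 - 0.36) = 0.8.
xp, tp = lorentz(x=10.0, t=5.0, v=0.6)
print(xp, tp)   # about (10 - 3)/0.8 = 8.75 and (5 - 6)/0.8 = -1.25
```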

====
Subject: 4. The 'just coordinates' argument
The 'just coordinates' argument is so patently ridiculous
that even opponents have a hard time accepting just how
simple and obvious its debunking can be, as shown in this
section. However, further sections take a more
arithmetical approach that you'll maybe find more professorial.
The 'just coordinates' argument is that t is not a
duration, not a time interval; it's just an arbitrary
clock reading. But what if the moving system observer
comes speeding by while you make your annual 'spring
forward' or 'fall back' change? The formula says that
the moving system clock's 'just coordinate' reading
can be calculated from yours:
t' = (t - vx/cc)/sqrt(1 - vv/cc) (Eq 1t)
Imagine the moving system observer's confusion if his
clock changes its reading while he's looking at it!
If his clock doesn't change when yours does, the formula
is wrong, if it is truly a 'just coordinates' formula.
And then what happens if you realize you were a day
early and put your clock back to what it had said
previously?
And suppose you are in NYC and your twin in LA and
both are watching the moving observer. You'll both be
using the same v because you are at rest wrt (with
respect to) each other. You're on Eastern Standard
Time and your twin is on Pacific Standard Time
maybe. You have three hours more on your clock than
does your twin.
On which 'just coordinate' clock will the Lorentz
transforms base the 'just coordinate' time the moving
system clock says? The formula applies to both of
your t-times:
t' = (t - vx/cc)/sqrt(1 - vv/cc) (Eq 1t)
Sure, the idea that you can change someone else's
clock with no connection of any kind is really
ridiculous, but Eqs 1x and 1t aren't MY equations.
Are they yours? And we aren't the ones to say x, t,
x', and t' are just coordinates.
If the t' formula is actually either an elapsed
time formula, or the basis of a t'/t ratio, then
there is no implication that one clock's reading
has anything to do with the other's.
It can only be rates of clock ticking, or how one
time INTERVAL compares to the other that the formula
is about.

====
Subject: 5. Single-system, little-purpose ambiguity.
Since we're going to be comparing measurements on two
coordinate systems in the next section, let's go to
our supply cabinet and get our yardstick (which we
use to measure things in inches) and our meterstick
(which we use to measure things in centimeters).
Here, I'm getting mine. Oh! Oh!
There's an ant on mine, and he ... she ... sure is
hanging on, right at the 3.5 inch mark of the yard
stick.
Let's see if I can wave the stick around enough that
she'll let go. Nope.
However, before I gave up I waved the stick and the
ant all over the place.
Always, however, the ant was at the 3.5 mark on the
yardstick, and always 3.5 away from the end of the
stick, however far and wide I have transported her.
Neither of those 3.5 facts means very much. Of the
two, the distance aspect meant almost nothing. So
the distance was 3.5 from the end. So what? That
length, distance, was not in use. And only maybe
the ant might have been concerned with just what
location, 'just coordinate', on the stick she was
at.
Just so with x and t.
So, is the 3.5 reading just a coordinate? Or a
distance/length? It's ambiguous in and of itself,
and really makes no difference what you say until
you try to make use of the number.
Hey, my address is 5047 Newton Street. If you
are looking for me and you're at 4120 Newton, it
is helpful information, because it tells you which
direction to go. Is that 'just coordinate'?
Where it really becomes useful, perhaps, is in
telling you how far away I am. That's not just
a coordinate value, that's a distance, length,
interval.
However, it is subtracting 4120 from 5047 that
tells you which direction and how far. It is only
because both 5047 and 4120 are distances from the
same point  ANY same point  that the result means
anything.
My x  my yardstick reading  is always a distance
or length; it is impossible to be otherwise with
an honest, competently designed yardstick.
Whether or not its reading is of good use in some
particular scientific formula depends on whether
I put the zero end of the yardstick at some useful
place. As in the introduction, we should either
put it at the starting location/end, or use two
readings from it: (x - x0).
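The house-number arithmetic above, for what it's worth, in Python (same numbers as in the text):

```python
# The street-address example in numbers: two house numbers on Newton Street.
# Each number is a distance from the same (arbitrary) zero point, which is
# exactly why subtracting them gives something useful.
my_address = 5047
your_address = 4120

displacement = my_address - your_address   # sign gives direction, size gives distance
print(displacement)                        # 927: go 927 numbers toward higher values
```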

====
Subject: 6. Relating two coordinate measures/systems.
Taking care to not damage our brave little ant, I place
my yardstick onto the table, zero end to the left, 36
end to the right.
Now I place the 'just coordinate' meterstick on the table
in the same orientation, in a random location, and find
that the ant's coordinate on the meterstick is 51.
The formula relating centimeters to inches is cm = i*2.54,
but we want a formula similar to x' = (x - vt)/sqrt(1 - vv/cc).
That would be c = i/.3937 approximately, but let's use x'
for the meterstick reading, and x for the inch reading:
x' = x/.3937.
3.5/.3937 = 8.89
Wait a minute. It's not just science but definition
that says c = i/.3937 = 8.89, so something is wrong. 8.89
is not 51.
We already knew that 51 cm was just an arbitrary coordinate.
Arbitrary not because that point isn't 51 cm from the zero
end of the meterstick, but because the zero point was in an
arbitrary position.
Let's put the meterstick in a position where its
zero point is at the yardstick zero point.
What is the centimeter coordinate now? Hey. 8.89,
just like the formula says.
The only way for a 'transform' like x'=x/g to work,
whatever g might be, is for both coordinate systems
to have their zero points aligned, in which case
saying the two measures are not intervals is pure
idiocy.
Note that with both zero points at the same position
both x' and x are great measures for scientific
purposes, in any and every case where we were smart
enough to put those zero points at a useful location.
There is one extension of x' = x/g that will let us
use the meterstick in an arbitrary position.
When the cm reading was 51, the zero point of the
yardstick read (51 - 8.89 =) 42.11 cm. If we call that
point x.z' we get
x' = x.z' + x/.3937
= 42.11 + 3.5/.3937
= 42.11 + 8.89
= 51.
Obviously, in this formula x/.3937 is the distance
from the x' coordinate of the location where x=0.
An interval.
Just as obviously, the fact that we now have the
correct formula for relating an x interval to an
arbitrary x' coordinate, does not mean that x'
is anything more than nonsense for use in any
scientific formula.
Unless we were smart enough to put the x zero
point in a useful location, and use (x' - x.z') in
the scientific formula. (x' - x.z') equals the useful,
Ratio Scale value x/.3937.
So, we have discovered a basic fact: a transformation
formula like x' = x/g works only if the two zero points
of the coordinate systems coincide. That makes it
nonsense to say the two coordinates are only coordinates
and not intervals. Both must be values that represent
distances from their respective zero points, unless you
take the proper steps to adjust for the discrepancy.
Make sure you understand that although the inclusion
of x.z' made it possible to correctly calculate x',
the result is nonsense when it comes to use of x'
for general length/distance purposes; it is x' - x.z'
that is a useful number in such cases. It could be
that we're measuring a sheet of paper with one end
at x=0 and the other at x=3.5; x'=51 is nonsense as
a centimeter measure of the paper.
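Here is a small Python check of the bookkeeping above. G = 0.3937 inches per centimeter; the 3.5, 42.11, and 51 figures are the ones from the example:

```python
# Numeric check of the meterstick/yardstick bookkeeping.
# x = reading in inches, x' = reading in centimeters.

G = 0.3937   # inches per centimeter (so cm = inches / G)

def cm_reading(x_inches, cm_of_yardstick_zero):
    """x' = x.z' + x/G: the meterstick coordinate of a point x inches along."""
    return cm_of_yardstick_zero + x_inches / G

# Sticks aligned at zero: the 'transform' x' = x/G works directly.
print(round(3.5 / G, 2))                 # 8.89 cm, as the conversion says

# Meterstick in an arbitrary position: the yardstick zero reads 42.11 cm.
print(round(cm_reading(3.5, 42.11), 2))  # 51.0, the ant's arbitrary coordinate
```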
But, you say, the Lorentz transforms contain a vt term.

====
Subject: 7. Distances and moving coordinate axes.
We discovered x'=x.z' + x/g as the correct formula
for relating one coordinate to another system's.
But the Lorentz transform contains another term,
vt/sqrt(1 - vv/cc). What is it?
Let's start with our x'=51 cm, x=3.5, x.z'=42.11 example.
Every minute, let's move the meterstick one inch to our
right.
At minute 0, the cm reading was 51 cm.
At minute 1, the cm reading is now 48.46 cm.
At minute 2, the cm reading is now 45.92 cm.
In this instance, v = 1 inch/minute, and t was 0, 1, 2.
What has happened is that we have made our x.z' a lie,
and increasingly so. vt/.3937 is the change in x.z':
x' = (x.z' - vt/.3937) + x/.3937.
Obviously, vt/.3937 is not a coordinate; even most SRians
wouldn't imagine it was. It is an interval, the distance
over which the moving system has moved since t=0.
And, of course, x/.3937 is the distance of our brave
little ant from the point where x=0, whose centimeter
reading is x.z' - vt/.3937. Yes, every minute the meter-
stick moves to the right, and the meterstick coordinate
of the spot where x=0 gets less and less, and eventually
negative.
Make sure you understand that every minute the x'
coordinate, because of -vt/g, becomes a better measure
of, say, the 3.5 paper we might be measuring with
the yardstick, given that 51 was too big a number and
-vt is negative. That is, until the two origins coincide
at x'=x=0, and then it gets worse and worse.
With -vt positive (because v<0) the situation is different.
With 51 and -vt positive, x' just gets worse and worse
over time.
Quite obviously, the fact that we now have the
correct formula for relating an x interval to an
arbitrary x' coordinate even when the x' axis is
moving, does not mean that x' is anything more than
nonsense for use in any scientific formula.
Unless we were smart enough to put the x zero point
in a useful location, and use (x' - x.z' + vt/.3937) in
the scientific formula. (x' - x.z' + vt/.3937) equals the
useful, Ratio Scale value x/.3937.
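A numeric sketch of that moving-stick formula in Python. I am assuming, as above, v = 1 inch/minute; since one inch is 2.54 cm, each minute the ant's centimeter reading drops by 1/.3937 = 2.54:

```python
# Sketch of the moving-meterstick bookkeeping: x' = x.z' - v*t/G + x/G.
# v is in inches per minute, readings in centimeters; numbers from the
# example (x = 3.5 in, x.z' = 42.11 cm at t = 0, v = 1 inch/minute).

G = 0.3937   # inches per centimeter

def moving_cm_reading(x, xz0, v, t):
    """Meterstick coordinate of point x after the stick moves for t minutes."""
    return xz0 - v * t / G + x / G

for t in range(3):
    print(t, round(moving_cm_reading(3.5, 42.11, 1.0, t), 2))
# Each minute the stick moves one inch right, so the reading drops 2.54 cm:
# 51.0, 48.46, 45.92.
```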

====
Subject: 8. Time intervals.
Instead of using our sticks, let's get out two clocks.
Mind you, we're not going to deal with different clock
rates here, just establish the same basics as for distance.
Your clock says 9:00 Eastern Standard Time (EST) and we
note that t=540 minutes when we put down the clock.
Blindly, let's turn the setting knob of your twin's Pacific
Standard Time clock and put it down before us.
According to what we see, EST's 540 minutes (9:00)
corresponds to PST's 14:30; t' = 870.
We know the formula relating PST to EST is t' (pacific)
= t (eastern) - 180 (minutes). Thus, it is not correct
that the second clock can have an arbitrary setting,
because 870 <> 540 - 180.
We know that the two clocks are related by t' = t/1, since
both are using the same second, hour, etc. units. But 870
(14:30 in minutes) is not 540/1 - 180, so once again we know
something is wrong.
However, t' = t.z' + t/1 works. EST midnight equals PST 0:00
(midnight) - 180, so t.z' = -180, and
t' = -180 + 540/1 = 360.
Since EST - 180 = PST, 9:00 EST is 6:00 PST = 360 minutes.
We see thus that like distance measures/coordinates, time
axis origins (zero points) must either be 'lined up' or
adjusted for.
So, the Lorentz/Einstein t' = t/sqrt(1 - vv/cc) must be the moving
system elapsed time interval since the time axes were both at
a common zero. There is no t.z' adjustment:
t' = (t - vx/cc)/sqrt(1 - vv/cc) (Eq 1t)
Make sure you understand that in the clock case, if the
EST is showing a good number for elapsed time since the
travelling observer passed NYC, then the PST clock is
silliness. t.z' must be zero or must be taken out of
time lapse calculations for the PST clock to be used
intelligently, just as was true for x.z'.
What is lacking as yet for Lorentz t' is the vx/cc term that
corresponds to the x' formula vt term.
Break it up into two parts: v/c and x/c.
v/c is a scaling factor that changes velocity from whatever
kind of unit you are using over to fractions of c.
x/c is distance divided by velocity, which is time. x/c
is thus the time interval since the two time axes
had a common zero point  which they have to have in the
Lorentz transforms which do not have the t.z' term we
learned to use above.
Thus, (vx/cc)/sqrt(1 - vv/cc) is the interval amount the
moving system clock has been changed, since the common/
adjusted time, over and beyond the elapsed time interval
represented by t/sqrt(1 - vv/cc).
We have discovered that the only way for t' to be t/g
is for t' and t to have a common zero point, just as
for x' and x. It would be otherwise if the t' formula
contained an adjustment t.z' under some name or other,
but the necessity to include such a term correlates
100% with t' numbers that aren't directly usable.
As for x and x', our knowledge of how to set up a proper
formula relating t and t' is of no use unless we use
the knowledge in scientific formulas; (t' - t.z' + xv/gcc)
gives us the only directly useful value: t/g.
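The clock bookkeeping above in Python (t.z' = -180 for PST relative to EST; the readings are the ones from the example):

```python
# The clock analogue of the origin adjustment: t' = t.z' + t/1.
# EST and PST tick at the same rate; only their zero points differ.

TZ_OFFSET = -180   # t.z': PST midnight is 180 minutes behind EST midnight

def pst_from_est(t_est_minutes):
    """Convert a minutes-past-midnight EST reading to PST."""
    return TZ_OFFSET + t_est_minutes

print(pst_from_est(540))                       # 360, i.e. 9:00 EST is 6:00 PST

# An elapsed time is the same on both clocks, because t.z' cancels:
print(pst_from_est(600) - pst_from_est(540))   # 60 minutes either way
```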

====
Subject: 9. Einstein's (1905) derivations.
When we return to Einstein's derivations of the transform
formulas with a well-focused eye, we find he was a wee bit
confused, or at least self-contradictory.
When he set up his (at first unknown) tau=moving system
time formulas, he created three particular instances of tau.
Tau.0 is the time at which light is emitted at the moving
origin toward a mirror to the right that is at rest
wrt that moving origin and at a constant distance from that
origin. He lets the stationary time slot have the value t,
a constant, the stationary system starting time.
Tau.1 is the time at which the light is reflected. He
lets the stationary time be t + x'/(c-v); t is still a
constant and x'/(c-v) is the time interval since t.
Tau.2 is the time at which the light gets (back) to the
moving origin. The stationary time value is put as t +
x'/(c-v) + x'/(c+v); t is still a constant and x'/(c-v)
+ x'/(c+v) is the time interval since t.
On the thesis that the moving observer sees the time to
the mirror as the same as the time back to the origin,
he sets
.5[ tau.0 + tau.2 ] = tau.1.
Tau.0 completely drops out of the analysis and leaves
no trace, and has no effect.
Further, the t you see in tau.0, tau.1, and tau.2 also
completely drops out with no trace and no effect, leaving
us with exactly what you'd get if you had explicitly said
t' is an interval and so is t.
What doesn't drop out in the stationary time values is
x'/(c-v) and x'/(c+v), the time interval it takes for
light to get to the fleeing mirror, and the time interval
it takes for light to get back to the approaching origin.
Thus, his resultant t' formula is strictly based on time
intervals in the stationary system. Time intervals since
some starting time, yes, but time intervals.
There is absolutely nothing in the derived formulas that
depends on arbitrary coordinates like the constant t in
the stationary time arguments.
Let's look at the x dimension; it is x' = x - vt [as x increases
by vt, the effect over time is x' = (x+vt) - vt], which Einstein
explicitly sets up as a constant stationary distance.
He uses that x' not just in the time interval parts of the
stationary time arguments, but also in the x (distance)
stationary system argument for the tau at the time light
is reflected.
x' can't be the stationary system coordinate of the mirror
at that time. That value is x'+vt.
x' is explicitly an interval, distance.
Thus, the whole tau derivation of the t' formula is fully and
explicitly based on x', a spatial length/distance/interval,
and the two time intervals x'/(c-v) and x'/(c+v).
While we're at it, if the starting t is not zero, his
x' = x - vt formula is complete nonsense also. Given that
there was some L that was the mirror x-location and length
when the light is emitted, if t was already, say, 500, then
x' = L - vt could have been a very negative length.
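A numeric check, in Python, that the constant t drops out of the combination .5(tau.0 + tau.2) - tau.1 on the stationary side. The values x' = 2, v = 0.5, c = 1 are made up; only the cancellation matters:

```python
# The stationary time arguments in Einstein's derivation are
#   T0 = t,  T1 = t + x'/(c-v),  T2 = t + x'/(c-v) + x'/(c+v),
# and the combination 0.5*(T0 + T2) - T1 depends only on the intervals.

def combination(t, xp, v, c=1.0):
    t0 = t
    t1 = t + xp / (c - v)
    t2 = t + xp / (c - v) + xp / (c + v)
    return 0.5 * (t0 + t2) - t1

# Hypothetical numbers: same x', v, c; wildly different starting times t.
a = combination(t=0.0, xp=2.0, v=0.5)
b = combination(t=500.0, xp=2.0, v=0.5)
print(a, b)   # equal up to floating-point rounding: the constant t cancels
```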

====
Subject: 10. A word about intervals.
There are intervals, and there are intervals.
If we put our yardstick zero point at one end
of a piece of paper and read off the coordinate
at the other end of the paper, we have a good
measure of the paper's length, a Ratio Scale
measure. [Absolute temperature scales are ratio
scale.]
If instead we put the one end of the paper at the
one inch mark (or the zero end of the stick one
inch 'into' the length of the paper) we get measures
that are one inch off the true, ratio scale length.
The two messed-up measures are still intervals,
but they are Interval Scale measures. [Household
temperature scales are interval scale, which is
why your physics and chemistry professors won't
let you use them without first converting to the
ratio scale absolute temperatures.]
t' = t/g and x' = x/g represent ratio scale measures,
given that t and x were ratio scale to start with.
t' = t.z' + t/g and t' = t/g - vx/gcc are both interval
scale measures, even given a good ratio scale t
and a good ratio scale x.
x' = x.z' + x/g and x' = x/g - vt/g are both interval
scale measures, even given a good ratio scale x
and a good ratio scale t.
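The ratio-vs-interval distinction in Python, using the temperature example (Kelvin standing in for 'absolute', Celsius for 'household'):

```python
# Ratio scale vs interval scale, via temperature: Kelvin has a true zero
# (ratio scale); Celsius has a shifted zero (interval scale), so ratios
# of Celsius readings are not meaningful.

def celsius_to_kelvin(c):
    return c + 273.15

t_hot, t_cold = 20.0, 10.0            # Celsius readings
print(t_hot / t_cold)                 # 2.0, but 20 C is not 'twice as hot'
print(celsius_to_kelvin(t_hot) / celsius_to_kelvin(t_cold))  # about 1.035

# Differences (intervals) are meaningful on both scales:
print(t_hot - t_cold)                                        # 10.0
print(celsius_to_kelvin(t_hot) - celsius_to_kelvin(t_cold))  # about 10.0
```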
Look for the (SR) Lorentz t', x' = degraded measures
document soon at a newsgroup near you.

====
Subject: 11. Intervals versus the Twins Paradox.
t' = (t - vx/cc)/g shows t' being greater than t.
The reason Special Relativity will not allow the
use of its basic time equation in determining what
SR has to say about the twins' ages, is that t' and
x' are supposedly just coordinates, and they say you
have to take the coordinate pairs (t', x') and (t, x)
into consideration at both the time and place the
twins' separation started and the time and place the
twins reunited.
Since t' and x' are actually both intervals, not
just coordinates, the 'excuse' is spurious, and is
so even without use of the obvious (x_b - x_a) and
(t_b - t_a) usages.
However, SR is right to be embarrassed by their
transformation formulas.
Look for the (SR) Lorentz t', x' = degraded measures
document at a newsgroup near you.

====
Subject: 12. Summary
A. t' = t/g and x' = x/g can be almost 'just coordinates'
in the sense that the values obtained may not be
of much use except in the most primal and useless
way: how long and how far since/from the time/
place they were zero. Even here, however, the zero
points within each of the two scale pairs (t', t)
and (x', x) must have been lined up. If the zero
points have been intelligently selected (such as
at the starting point and time of a trip) they
can be rationally used 'as is' in any valid
scientific equation.
B. Even the interval scale t' = t.z' - xv/gcc + t/g and
x' = x.z' - vt/g + x/g are not 'just coordinates'. They
can be used to good effect by establishing the relevant
starting times/points and using (t' - t.z' + xv/gcc) and
(x' - x.z' + vt/g), as the situation may require.
C. When you see vx/gcc or vt/g in use in any guise with nonzero
values, you know the resultant t' or x' is a degraded, interval
scale value.
X: Anytime you do not see what amounts to t.z' and xv/gcc in
the time case, or x.z' and vt/g in the distance case, you
know that the t' and/or x' in use are intervals. Period.
Y: Either set your clock to zero at the start of the relevant
time interval, or use (t - t0), with both being readings on
the same clock. Either move your x-axis origin to the starting
end or point, or use (x - x0), with both being readings on
the same axis.
Z: In _(SR) Lorentz t', x' = Degraded (Interval) Scales_ we see
that t' and x' satisfy the mathematical tests for/of interval
scales when vt and vx/cc are not zero; thus, they must
be intervals. When vt and vx/cc are zero, t' and x'
satisfy the much better mathematical definition of
ratio scales, and are thus not just mere intervals,
but (rescaled) good ones.
Eleaticus
!?!?!?!?!?!?!?!?!?
! Eleaticus Oren C. Webster ThnkTank@concentric.net ?
! Anything and everything that requires or encourages systematic ?
! examination of premises, logic, and conclusions ?
!?!?!?!?!?!?!?!?!?
====
Subject: Re: Statistical Physics, Economics & Demographics (was: why [...]
physics [...] in finance and biology?)
Why can't they be mathematical
statisticians?
>
> Because it's in an empirical science.
What ? Statistics ? Are you kidding ?
====
Subject: A variation on the Secretary Problem
Hey everyone,
I have a variation on the secretary problem I'd like to discuss. I've
been toying with it, and I'd love to hear people's ideas.
The problem is such: A sultan prince is ready to marry a wife. One
hundred of the most beautiful women have been gathered from across the
kingdoms to present themselves to him. The sultan sees the women one at
a time, and upon seeing each one he either chooses her or rejects her.
He cannot go back on his decision. The question is: how can he
maximize the expected beauty of his wife?
In the normal problem, the question is to find the best strategy to get
the best wife. This is different  what's the strategy to get the
highest expected beauty?
At first, I tried it this way:
Assume the women are ranked 1-100. Pick k women and reject them, then
take the next woman that beats those k. I found that the best strategy
was k=9 (I think, this was years ago), and you get an expected beauty
of about 91.
But, thinking about it some more, I felt there was a flaw in this
reasoning. How could I prove it was the best strategy? I think I've
found a better strategy.
A. Assume we've seen 99 women. We must pick the last one. What is her
expected beauty? 50.5.
B. So now assume we've seen 98 women. Using A, if the woman looks
better than a 50.5 given the previous 98, pick her. Otherwise, reject
her. What is her expected beauty?
This one must be either more beautiful than average (49/99 of the
time), less beautiful (49/99 of the time), or in the middle (1/99 of
the time).
If she is more beautiful, her expected beauty is 75.5. If she is in
the middle, her expected beauty is 50.5. If she is less beautiful,
her expected beauty is 25.5.
So then 49/99 of the time, we get 75.5, 1/99 of the time, we get 50.5,
and 49/99 of the time, we reject her and go on to the last one, giving
us a value of 50.5
So 49/99 * 75.5 + 50/99 * 50.5
= (3699.5 + 2525) / 99
= 62.87
So if we reject the first 98 women, we have an expected beauty of
62.87.
C. So now assume we've seen 97 women. Using B, if the woman looks
better than a 62.87 given the previous 97, pick her. Otherwise reject
her...
etc, etc..
Using this strategy recursively, we should be able to find the ideal
stopping point.
Any thoughts? This seems to me like it must be the ideal strategy,
since at any point we can calculate exactly what the expected beauty B
of the next woman would be .. so for any candidate, we accept her if
she is (probably) better than B or reject if she's worse than B.
I'd love to hear any thoughts or comments! I've been thinking about
this for a long time so I'd love to hear if my ideas are way off base
or if I'm onto something :)
Julien
====
Subject: Re: A variation on the Secretary Problem
>I have a variation on the secretary problem I'd like to discuss. I've
>been toying with it, and I'd love to hear people's ideas.
>The problem is such: A sultan prince is ready to marry a wife. One
>hundred of the most beautiful women have been gathered from across the
>kingdoms to present themselves to him. The sultan sees the women one at
>a time, and upon seeing each one he either chooses her or rejects her.
>He cannot go back on his decision. The question is, how can he maximize
>the expected beauty of his wife.
The problem is not fully specified unless you tell us how beauty
is distributed.
>In the normal problem, the question is to find the best strategy to get
>the best wife. This is different  what's the strategy to get the
>highest expected beauty?
>At first, I tried it this way:
>Assume the women are ranked 1-100. Pick k women and reject them, then
>take the next woman that beats those k. I found that the best strategy
>was k=9 (I think, this was years ago), and you get an expected beauty
>of about 91.
Ah, so is it the expected _rank_ that you want to maximize?
If so, you should say so at the start.
I assume you know nothing about the actual distribution of beauty
on any absolute scale, and you just want to maximize the rank of
the woman you choose. So let A(n) be your expected value (under the
best possible strategy) after rejecting the first n women. Thus,
as you say, A(99) = 50.5. Now after rejecting the first n women
suppose you see candidate n+1, whose rank among the women seen so
far is k. You have two possible moves: accept her, and get an
expected score of 101k/(n+2), or reject and get A(n+1). You
would therefore accept her if and only if 101k/(n+2) >= A(n+1),
i.e. k >= (n+2)A(n+1)/101. This gives the recursion
A(n) = 1/(n+1) sum_{k=1}^{n+1} max(101k/(n+2), A(n+1)).
Using this recursion (and Maple) I get A(0) =
890068375249377435276407336006408473/9138582027089704342317513701352000
= 97.39677038 approximately. The optimal strategy will always reject
the first 27 women; among its later acceptance thresholds, it accepts
from #68 to #73 in the top 4 so far,
from #74 to #77 in the top 5 so far,
#78 to #80 in the top 6 so far,
#81 and #82 in the top 7 so far,
#83 and #84 in the top 8 so far,
#85 and #86 in the top 9 so far,
#87 in the top 10 so far,
#88 and #89 in the top 11 so far,
#90 in the top 12 so far,
#91 in the top 14 so far,
#92 in the top 15 so far,
#93 in the top 17 so far,
#94 in the top 19 so far,
#95 in the top 21 so far,
#96 in the top 25 so far,
#97 in the top 30 so far,
#98 in the top 37 so far,
#99 in the top 49 or 50 so far.
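For anyone who wants to check the number without Maple, here is the stated recursion in exact rational arithmetic in Python (a sketch; A[n] is the expected rank after rejecting the first n women):

```python
# Robert Israel's recursion, in exact arithmetic: A(99) = 50.5 and
# A(n) = (1/(n+1)) * sum over k=1..n+1 of max(101*k/(n+2), A(n+1)).
from fractions import Fraction

N = 100
A = [Fraction(0)] * N
A[N - 1] = Fraction(101, 2)   # A(99) = 50.5: forced to take the last woman

for n in range(N - 2, -1, -1):
    # Expected absolute rank if we accept the (n+1)-th candidate, who is
    # k-th best from the bottom among the n+1 seen so far.
    total = sum(max(Fraction(101 * k, n + 2), A[n + 1])
                for k in range(1, n + 2))
    A[n] = total / (n + 1)

print(float(A[0]))   # about 97.39677, matching the Maple value in the post
```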
Robert Israel israel@math.ubc.ca
Department of Mathematics http://www.math.ubc.ca/~israel
University of British Columbia Vancouver, BC, Canada
====
Subject: Re: A variation on the Secretary Problem
>Using this recursion (and Maple) I get A(0) =
>890068375249377435276407336006408473/9138582027089704342317513701352000
>= 97.39677038 approximately. The optimal strategy will always reject
>the first 27 women.
Wait a minute: so if A(0) is 97.4, then for what n is A(n) the
highest?
Maybe I'm not understanding what you're saying, but how can A(0) be so
high if you always reject the first 27 women? And if you've rejected 0
women, wouldn't your expected rank be 50.5, just as if you had rejected
them all?
====
Subject: Re: A variation on the Secretary Problem
http://www.mathatlas.org/index/60XX.html
I can't say that I follow exactly how you got this expectation
101k/(n+2), but I will think about it :)
====
Subject: Re: A variation on the Secretary Problem
>If so, you should say so at the start.
To clarify: Yes. The prince wants to maximize expected rank. When he
sees a woman alone he has no idea what her beauty is. And the prince
could see #1 and #100, but would have no idea of their respective ranks
other than one is more beautiful than the other.
====
Subject: Re: A variation on the Secretary Problem
>
>I have a variation on the secretary problem I'd like to discuss. I've
>>been toying with it, and I'd love to hear people's ideas.
>>The problem is such: A sultan prince is ready to marry a wife. One
>>hundred of the most beautiful women have been gathered from across the
>>kingdoms to present themselves to him. The sultan sees the women one at
>>a time, and upon seeing each one he either chooses her or rejects her.
>>He cannot go back on his decision. The question is, how can he maximize
>>the expected beauty of his wife.
>>
>
>The problem is not fully specified unless you tell us how beauty
>is distributed.
>In the normal problem, the question is to find the best strategy to get
>>the best wife. This is different  what's the strategy to get the
>>highest expected beauty?
>>At first, I tried it this way:
>>Assume the women are ranked 1100. Pick k women and reject them, then
>>take the next woman that beats those k. I found that the best strategy
>>was k=9 (I think, this was years ago), and you get an expected beauty
>>of about 91.
>>
>>
Question to Zass: What happens if none of the remaining candidates are
better than the best among the first k? Do you end up choosing the
last candidate?
>Ah, so is it the expected _rank_ that you want to maximize?
>If so, you should say so at the start.
>I assume you know nothing about the actual distribution of beauty
>on any absolute scale, and you just want to maximize the rank of
>the woman you choose. So let A(n) be your expected value (under the
>best possible strategy) after rejecting the first n women. Thus,
>as you say, A(99) = 50.5. Now after rejecting the first n women
>suppose you see candidate n+1, whose rank among the women seen so
>far is k. You have two possible moves: accept her, and get an
>expected score of 101k/(n+2), [...]
>
I don't see where you get this expectation. If you have seen n+1
candidates, and k-1 of these are of lower rank than the current
candidate, that means you have seen n + 1 - k candidates better than
the current one. Therefore, the current candidate is equally likely to
have absolute rank any of k, k+1, ..., 99 - n + k, so the expected
absolute rank of the current candidate is (99 - n)/2 + k.
More generally, suppose there are n candidates total (n=100 in this
example). Let V_m(k) be the optimal expected absolute rank of the
chosen candidate given you have inspected m candidates and k of these
are worse than the current candidate (0 <= k < m). The maximal
expected rank is V_1(0). V_n(k) = k+1, and, for m < n,
V_m(k) = max((n-m)/2 + k + 1, sum(i=0..m, V_{m+1}(i)) / (m+1))
I have the suspicion that the best one can do from any policy is to
end up with an expected rank of (n+1)/2, but I would have to think about
how to show that. It's been a while since I have worked on such problems.

Stephen J. Herschkorn sjherschko@netscape.net
Math Tutor in Central New Jersey and Manhattan
====
Subject: Re: A variation on the Secretary Problem
>>At first, I tried it this way:
>>Assume the women are ranked 1100. Pick k women and reject them, then
>>take the next woman that beats those k. I found that the best strategy
>>was k=9 (I think, this was years ago), and you get an expected beauty
>>of about 91.
>Question to Zass: What happens if none of the remaning candidates are
>better than the best among the first k? Do you end up choosing the
>last candidate?
Correct: in this (flawed) strategy I ended up picking the
last candidate. But clearly this strategy is inferior, although I'd
be interested in seeing what the expected beauty is there. I did it by
hand years ago and got in the low 90s, which is clearly lower than the
97 Robert got above.
====
Subject: Re: Humans don't exist in America
> Not one.
Homo sapiens was a passing fad.

http://hertzlinger.blogspot.com
====
Subject: Re: Now how did we end up with this genius for President?
On Mon, 20 Jun 2005 22:44:56 -0700, Onideus Mad Hatter
>
>>To * US *:
>>You have posted over 150 messages on this thread.
>>GET A LIFE!
>
> Why are you so concerned about his life? ...and why did you spend all
> that time counting how many posts he's made in this thread? In the
> words of...well *you*...
>
> GET A LIFE!
Where do you get lives anyway? I was in Target recently but I didn't
see any section marked LIVES ...

http://hertzlinger.blogspot.com
====
Subject: Re: Now how did we end up with this genius for President?
>
>
>
>>[Bush] wanted a shot at
>>Saddam going into office. The neocons cooked up WMD as a potential
>>cause to promote and he bought it like a trout to a good lure.
>
>
> Which has a near exact parallel in Hitler's contrived reasons
> for invading Poland, namely that Polish insurgents
> attacked a German broadcast station near the border and
> broadcast calls to Silesians (Silesia was then part of
> Poland) to attack ethnic German residents. In
> reality, the leader of the attack was a German
> convict working for Reinhard Heydrich. He was killed
> for his efforts in the raid.
>
> (On a spring day in 1942, Heydrich would distinguish
> himself by not quite completing a U-turn at a streetcar
> crossing in Prague.)
>
====
Subject: Re: Now how did we end up with this genius for President?
>
>>re: Just another LIE! The ship's crew declared THEY had accomplished
>>their
>>mission and the President had nothing to do with it.
>>Nope  the banner was provided by the White House, ace.
>>Tim
>While you are arguing over the real meaning of a scrap of canvas. Take
>another look at what really matters.
>>http://www.homestead.com/prositesprs/wtcjumpers.html
>>http://www.indiadaily.com/editorial/0618f04.asp
>Hey Mark, I'm not arguing about the meaning of a scrap of canvas.
>>You said that the White House didn't say "Mission Accomplished." I'm
>>telling you that the banner was provided by the White House. It was
>>just the beginning of the happy talk dance they've been doing since
>>that day. It was wrong and at best serves as a clear demonstration of
>>how stupid they were. At worst, they were being dishonest, but for the
>>moment I'm prepared to presume the former.
>>With respect to the picture ... well, you're only the 3rd or 4th
>>conservative to toss these up when they lose a point ... take one off
>>for originality  I expect more from you.
>>You may think these horrible pictures change the facts at hand. They
>>don't. The President said what he said, period.
>>Tim
>>P.S. On second thought, you ought to send these to the Bush White
>>judgement and failure to realize what going into Iraq would cause, we're
>>far more likely to see NEW PICTURES of that nature than we were the day
>>before we entered Iraq.
>
> Hey, Tim, you can buy a Club Gitmo Tee shirt by going to
> Rushlimbaugh.com.
> jt
> 
> Without the second amendment
> the first amendment means nil.
> www.townhall.com
> www.newsmax.com
> www.nranews.org
>
>
ROFLMAO
Tim
====
Subject: Re: Now how did we end up with this genius for President?
> > > Now how did we end up with this genius for President?
> > It's called an election, dimwit.
> > Me and almost three million more citizens voted to reelect President
> > Bush than that other libtard that the liars in the democrat party
> > leadership put on the ballot.
> > We are happy with our choice. Crawl back under your rock and don't come
> > out till you learn how to spell your own name. LOL!
> > This too shall pass. For a while Hitler had the German people fooled.
> Have you seen the polls lately? Support for the war in Iraq dwindles
> daily, most oppose changes to Social Security, and the economy is hardly
> booming. Had it not been for 9/11, Bush would never have been reelected.
> > Had it not been for Clinton's ignoring the threats, 9/11 would just
> > be another day on the calendar!
> > A couple of things to keep in mind:
> > 1) Clinton tried to take out al Qaida, but he missed.
> > 2) Bush hasn't had much better luck since he put troops into Afghanistan
> > (easily distracted by something else, perhaps?).
>
> If Osama is not in Afghanistan does that mean you are demanding that
> Bush invade another country unilaterally?
We ask that they please extradite him. And then we remind them of what
we do to countries that either collude with, or can't keep a handle on
their citizens or guests.
Unfortunately, we seem to have lost credibility when it comes to making
such threats. Look at Iran and North Korea. Or Pakistan, where Musharraf
is starting to throw his weight around at the behest of Osama's allies
within that country.
> It turned out to be a
> tougher problem than Bush anticipated.
>
> It is a tough problem and Bush knew exactly how tough it would be. If
> you pay attention to his speeches, he reminded us how difficult this
> war would be each and every time he spoke on the subject.
Then why did he become distracted with another, bigger problem?
When Reagan was elected, the Iranians released the hostages, knowing
that he would waste no time in kicking butts to get them out. Bush no
longer has that level of respect (if he ever did) and as a result, he
can't rely on it to negotiate settlements on other, more important
issues.
> Maybe Clinton had some insight
> into the scope of the problem that our current leadership lacks.
>
> If that's true then Clinton wouldn't have allowed Osama to run free for
> the eight years of his presidency, and Clinton also wouldn't have
> allowed 19 terrorists to enter the US to live and train to kill
> Americans with airliners. Nope, Clinton didn't know what was going on
> any more than anyone else.
>
> > 3) Our current problems aren't going to be solved by blaming Clinton.
> He's been out of office for a while now.
>
> Most of us who voted for Bush don't really care all that much about
> what Clinton did except for him serving as an example that the problems
> we face are not unique to Bush or Republicans. I'm willing to give
> Clinton a pass because he did the best he could in the pre 9/11
> political environment. But, if you want to claim that Bush isn't doing
> a good job against terrorism then we must go back and point out that
> Clinton didn't either.
But then there are those who do keep bringing the past up. Probably as
an alternative to having to face the problem at hand.

Paul Hovnanian mailto:Paul@Hovnanian.com

I think you left the stove on.
====
Subject: Re: Now how did we end up with this genius for President?
<42a5c1a4$36$fuzhry+tra$mr2ice@news.patriot.net>
<42a827a6$4$fuzhry+tra$mr2ice@news.patriot.net>
<42B76CA5.D4A00470@Hovnanian.com>
<5ELte.31254$J12.7404@newssvr14.news.prodigy.com>
<69Xte.8999$pa3.5129@newsread2.news.atl.earthlink.net>
> I've watched what he's done since the day that he declared hostilities
> all but over.
> His decisions and lack of judgment or foresight have cost American
> lives, Mark.
>
Bush is doing just fine so you are talking about Clinton, right? You
must be talking about how Clinton failed to accept Saddam on a silver
platter because he was afraid of the political consequences, or how he
paid North Korea to develop their nuclear weapons program in secret so
he wouldn't have to deal with that either. You could also be talking
about how Clinton failed to arrest those 19 9/11 hijackers during the 5
YEARS that they lived in the US and trained to kill Americans right
under his nose. Richard Clarke should be in jail for gross negligence
and incompetence.
Go back and listen to Bush's speech again. Take special notice of the
part where he says that only major combat operations are over but there
is a lot of hard work left to do. Do you understand what major combat
operations are? It's the engagement of enemy tanks, and air force
fighters and divisions of troops commanded by an organized military
command structure. Do you see the insurgents in Iraq using tanks or
fighter planes or attacking US troops with divisions of soldiers? Get
real, Dude. The insurgents are so afraid of US troops that they plant bombs
in the street and run away to hide in the shadows like cockroaches.
You are living in a fantasy world encouraged and supported by the lies
of CNN, CBS and Newsweek. Give it up, dude! Stop committing treason
against your country by providing aid and comfort to the enemy during a
time of war.
====
Subject: Re: Now how did we end up with this genius for President?
>
>
>
>>Mark, not only have I listened to what he said more than once, but
>>I've watched what he's done since the day that he declared hostilities
>>all but over.
>>His decisions and lack of judgment or foresight have cost American
>>lives, Mark.
>
>
> Bush is doing just fine so you are talking about Clinton, right?
Actually, I wasn't.
You
> must be talking about how Clinton failed to accept Saddam on a silver
> platter because he was afraid of the political consequences ....
Conservatives share part of the blame for this one. Clinton was keeping
his head down. He felt that any move at that particular moment would
have been seen as an attempt to distract the country from MonicaGate, an
error he admits responsibility for; but given that conservatives were
chasing him about 15-year-old property transactions and whether or not
he got head in the Oval Office, I'd have to say it was a poor tradeoff,
wouldn't you?
or how he
> paid North Korea to develop their nuclear weapons program in secret so
> he wouldn't have to deal with that either.
Clinton's foreign policy foresight could justifiably be compared to the
lack of wisdom our current President is demonstrating on this particular
matter, but I'm sure you're completely convinced that refusing to talk
to North Korea is more excusable than actually negotiating and then
having them renege, so I won't waste our time on this particular matter.
And then we have Iran, don't we??
You could also be talking
> about how Clinton failed to arrest those 19 9/11 hijackers during the 5
> YEARS that they lived in the US and trained to kill Americans right
> under his nose.
I suppose when Hillary was channeling Eleanor Roosevelt she could have
picked up those hijackers on her ouija board .... of course, Nancy and
her astrologers could have done the same thing, but I guess a couple of
them weren't born yet ...
Richard Clarke should be in jail for gross negligence
> and incompetence.
>
Richard Clarke laid it out for Condi and Condi told him to stand in
line. This one's not on him, bud.
> Go back and listen to Bush's speech again. Take special notice of the
> part where he says that only major combat operations are over but there
> is a lot of hard work left to do. Do you understand what major combat
> operations are?
Yep. Do you?
It's the engagement of enemy tanks, and air force
> fighters and divisions of troops commanded by an organized military
> command structure.
Oh.
Do you see the insurgents in Iraq using tanks or
> fighter planes or attacking US troops with divisions of soldiers?
I'm fairly certain that the US soldiers fighting without the tanks that
Rummy decided they didn't need appreciate the distinction you're drawing
here.
Get
> real, Dude.
I think I have been, thank you.
The insurgents are so afraid of US troops they plant bombs
> in the street and run away to hide in the shadows like cockroaches.
They also strap explosives to their backs wrapped in shrapnel, walk into
police stations or past US checkpoints and BLOW THEMSELVES UP. That
demonstrates a pretty scary level of commitment. Look at the USSR in
Afghanistan. Look at Vietnam. Neither ended well for the party with
dramatically superior firepower.
> You are living in a fantasy world encouraged and supported by the lies
> of CNN, CBS and Newsweek.
And the Washington Times. And the Wall Street Journal. And the
McLaughlin Group ... and Jon Stewart ... and when I'm really desperate,
FOX NEWS SUNDAY.
Give it up, dude! Stop committing treason
> against your country by providing aid and comfort to the enemy during a
> time of war.
>
I guess I better check the front door for the cops now, right?
T
====
Subject: Re: Applications of Continued Fractions?
On 23-06-2005, Jean-Claude Arbaut wrote:
>> (1) Pell's Equation (x^2  d y^2 = k, usually k=1)
>> (2) Approximating log_b x, where b and x are integers, using only
>> integer arithmetic.
Nobody seems to have mentioned that the first modern algorithm for
factoring large integers used continued fractions (of the
square root of the integer).

Thomas Baruchel
====
Subject: Re: Applications of Continued Fractions?
On 23-06-2005, Jean-Claude Arbaut wrote:
> (4) help prove a number is irrational (infinite c.f.) or transcendental
> (some properties on the partial quotients). Look for "Diophantine
> approximation" on the web. If I remember correctly, an irrational
> algebraic number cannot be approximated too well; if it is, it's
> actually a transcendental number.
Don't forget quadratic numbers; in a project such as Plouffe's
(identifying a given arbitrary number), the first thing to do is to
compute its continued fraction. If the number is quadratic, you can
identify it as soon as you have done so. Of course there are now
other ways to do it (linear dependencies between the number, its
square, and 1), but it is still a good approach for studying a
number when you don't know what it is made of.

Thomas Baruchel
====
Subject: Re: Applications of Continued Fractions?
I've used continued fractions to
1. find rational points, via the ellpointtoz GP/Pari function, from elliptic
curves to the modular field. This applies to points divided or multiplied
by n.
2. pull rational points on elliptic curves through n-isogenies without
explicitly denoting the exact isogeny in question.
They are able to locate the x term of a rational point fairly quickly, if
we know the approximate z value.
====
Subject: Re: Applications of Continued Fractions?
On 23 Jun 2005 16:23:30 -0700, Proginoskes
>Does anyone know of any nice applications of continued fractions, other
>than:
>(1) Pell's Equation (x^2 - d y^2 = k, usually k=1)
>(2) Approximating log_b x, where b and x are integers, using only
> integer arithmetic.
> 
Here is a reference that should be of some use to you:
http://plus.maths.org/issue11/features/cfractions/
The historical example of Huygens and gear design is quite nice.
====
Subject: Re: Applications of Continued Fractions?
On 24-06-2005, Anon wrote:
>>Does anyone know of any nice applications of continued fractions, other
Since a computer can't handle real numbers, several ideas exist
for representing them in the memory of a computer. Gosper
has given algorithms for performing basic arithmetic operations
with continued fractions, resting on the following idea:
- compute on the fly further quotients of the two continued fractions
involved in the operation, as needed;
- extract on the fly new quotients for the result.
As long as you can handle infinite streams of integers (e.g. using
lazily evaluated infinite lists) you may consider that you compute with
exact reals (see http://contfrac.sourceforge.net ). Of course
it appears to be less useful than approximate representations
(much slower, etc.), but it is an interesting idea, since several
important constants and functions can easily be given as a computer
object with the "what is the next quotient?" method.
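A minimal sketch of the lazy "what is the next quotient?" idea, for a rational input (this is only the quotient stream, not Gosper's full arithmetic on two streams; the function name is mine):

```python
from fractions import Fraction

def cf_quotients(x):
    """Lazily yield the continued-fraction quotients of a rational x,
    producing each quotient only when the consumer asks for the next one."""
    while True:
        a = x.numerator // x.denominator  # floor of x
        yield a
        x -= a
        if x == 0:
            return          # rational input: the stream terminates
        x = 1 / x

# the stream is consumed one quotient at a time, e.g. next(stream)
stream = cf_quotients(Fraction(355, 113))
```

For an irrational number the same interface would simply never terminate, which is where the lazy-list view pays off.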

Thomas Baruchel
====
Subject: Re: Applications of Continued Fractions?
> On 24-06-2005, Anon wrote:
> Does anyone know of any nice applications of continued fractions, other
>
> Since a computer can't handle real numbers, several ideas exist
> for representing them in the memory of a computer. Gosper
> has given algorithms for performing basic arithmetic operations
> with continued fractions, resting on the following idea:
>
> - compute on the fly further quotients of the two continued fractions
> involved in the operation, as needed;
> - extract on the fly new quotients for the result.
>
> As long as you can handle infinite streams of integers (e.g. using
> lazily evaluated infinite lists) you may consider that you compute with
> exact reals (see http://contfrac.sourceforge.net ). Of course
> it appears to be less useful than approximate representations
> (much slower, etc.), but it is an interesting idea, since several
> important constants and functions can easily be given as a computer
> object with the "what is the next quotient?" method.
How is it better than having an infinite (lazy) list of digits?
====
Subject: Re: Applications of Continued Fractions?
I don't know if somebody else has mentioned this, but continued
fraction expansions are also intimately related to Padé
approximation of analytic functions. They are used to analyze
convergence of the Padé approximants.
====
Subject: Re: Applications of Continued Fractions?
> Does anyone know of any nice applications of continued fractions, other
> than:
> (1) Pell's Equation (x^2 - d y^2 = k, usually k=1)
> (2) Approximating log_b x, where b and x are integers, using only
> integer arithmetic.
>
I've received great replies, but I forgot to mention what context I'm
looking for things in.
I teach a problem-solving course at Arizona State, intended mainly to
prepare undergraduate students for the Putnam Exam, without requiring
that they actually take it. I want to include a special topic this
semester (other than recurrence relations, which is what I've done in
the past) and considered continued fractions. It's a nice topic which
doesn't require deep mathematics, and it doesn't conflict with the
Number Theory course which is offered.
I also intended to do things like approximating a real number and
finding what real number a periodic continued fraction represents, but
was interested in non-obvious applications.
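For the "what real number does a periodic continued fraction represent" part, a sketch for a purely periodic expansion: write x = (p*x + p')/(q*x + q') using the period's convergents and solve the resulting quadratic (the function name and interface are my own):

```python
import math

def periodic_cf_value(period):
    """Value of the purely periodic continued fraction
    [a1; a2, ..., ak, a1, a2, ...] with positive integer quotients.
    Since x = [a1; ..., ak, x] = (p*x + p_prev) / (q*x + q_prev), where
    p/q and p_prev/q_prev are the last two convergents of the period,
    x solves q*x^2 + (q_prev - p)*x - p_prev = 0; take the positive root."""
    p_prev, p = 1, period[0]
    q_prev, q = 0, 1
    for a in period[1:]:
        p_prev, p = p, a * p + p_prev   # standard convergent recurrences
        q_prev, q = q, a * q + q_prev
    A, B, C = q, q_prev - p, -p_prev
    return (-B + math.sqrt(B * B - 4 * A * C)) / (2 * A)
```

For example, `periodic_cf_value([1])` is the golden ratio and `periodic_cf_value([2])` is 1 + sqrt(2), as expected for [1;1,1,...] and [2;2,2,...].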

====
Subject: Re: Applications of Continued Fractions?
> I also intended to do things like approximating a real number and
> finding what real number a periodic continued fraction represents, but
> was interested in non-obvious applications.
Theorem (Conway). Two rational tangles constructed from integer
sequences are isotopic if and only if their associated continued
fractions evaluate to the same rational number.
Tangles are a concept from knot theory, and I think any application
of continued fractions there counts as non-obvious. There's a
discussion in Peter Cromwell, Knots and Links.

Gerry Myerson (gerry@maths.mq.edi.ai) (i > u for email)
====
Subject: Re: Applications of Continued Fractions?
Does anyone know of any nice applications of continued fractions, other
> than:
> > (1) Pell's Equation (x^2 - d y^2 = k, usually k=1)
> (2) Approximating log_b x, where b and x are integers, using only
> integer arithmetic.
>
I probably should have mentioned that the applications I'm looking for
are for a problemsolving class I teach at Arizona State, and this
would be a special topic for the semester, so I can't do anything
really deep.
> (3) providing good rational approximations of any positive real number.
Thanks for reminding me, though.
> (4) help prove a number is irrational (infinite c.f.) or transcendental
> (some properties on the partial quotients). Look for "Diophantine
> approximation" on the web. If I remember correctly, an irrational
> algebraic number cannot be approximated too well; if it is, it's
> actually a transcendental number.
Isn't this backwards, that if you _can_, then it's transcendental?
(Especially for Liouville numbers)
> (5) computing numerically many more functions than log (erf comes to mind).
These last two are probably beyond the scope of the course, but they
make sense.
> (6) providing formulas for e, pi, etc.
I was going to show the continued fraction expansions for e (well,
actually e-1) and pi.
> (7) solve diophantine equation a*x+b*y=1
I learned how to do this using the Euclidean algorithm, although I
guess that's how you find the continued fraction for a rational number
(in disguise). I probably won't cover this since they do it in the
Number Theory class.
> There are certainly many more I don't know of (or I forgot :)).
Well, that's a better list than I came up with! 8)

====
Subject: Re: Applications of Continued Fractions?
>
>
>>
> Does anyone know of any nice applications of continued fractions, other
> than:
>
> (1) Pell's Equation (x^2 - d y^2 = k, usually k=1)
> (2) Approximating log_b x, where b and x are integers, using only
> integer arithmetic.
>
>
> I probably should have mentioned that the applications I'm looking for
> are for a problemsolving class I teach at Arizona State, and this
> would be a special topic for the semester, so I can't do anything
> really deep.
>
>> (3) providing good rational approximations of any positive real number.
>
> Thanks for reminding me, though.
It's a very good motivation, I think. And 355/113 is rather
impressive.
>> (4) help prove a number is irrational (infinite c.f.) or transcendental
>> (some properties on the partial quotients). Look for "Diophantine
>> approximation" on the web. If I remember correctly, an irrational
>> algebraic number cannot be approximated too well; if it is, it's
>> actually a transcendental number.
>
> Isn't this backwards, that if you _can_, then it's transcendental?
> (Especially for Liouville numbers)
That's what I wanted to say; another manifestation of my poor English :)
>> (5) computing numerically many more functions than log (erf comes to mind).
>
> These last two are probably beyond the scope of the course, but they
> make sense.
>
>> (6) providing formulas for e, pi, etc.
>
> I was going to show the continued fraction expansions for e (well,
> actually e-1) and pi.
If it's not too deep, the formula for {0,1,2...} is funny.
>> (7) solve diophantine equation a*x+b*y=1
>
> I learned how to do this using the Euclidean algorithm, although I
> guess that's how you find the continued fraction for a rational number
> (in disguise). I probably won't cover this since they do it in the
> Number Theory class.
It's almost the Euclidean algorithm: the next-to-last convergent of
a/b gives the answer.
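The next-to-last-convergent remark can be sketched in a few lines (helper name is mine; it assumes coprime positive a and b):

```python
def solve_axby1(a, b):
    """Solve a*x + b*y = 1 for coprime positive integers a, b, using the
    fact that the next-to-last continued-fraction convergent p/q of a/b
    satisfies a*q - b*p = +1 or -1."""
    # continued-fraction quotients of a/b (the Euclidean algorithm in disguise)
    quotients, num, den = [], a, b
    while den:
        quotients.append(num // den)
        num, den = den, num % den
    # run the convergent recurrences; keep the next-to-last convergent
    p_prev, q_prev = 1, 0
    p, q = quotients[0], 1
    for ak in quotients[1:]:
        p_prev, p = p, ak * p + p_prev
        q_prev, q = q, ak * q + q_prev
    # now p/q == a/b and a*q_prev - b*p_prev == +/- 1
    sign = a * q_prev - b * p_prev
    return sign * q_prev, -sign * p_prev
```

For instance, `solve_axby1(355, 113)` returns (-7, 22), and indeed 355*(-7) + 113*22 = 1.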
>> There are certainly many more I don't know of (or I forgot :)).
>
> Well, that's a better list than I came up with! 8)
>
> 
>
====
Subject: Re: Applications of Continued Fractions?
>Does anyone know of any nice applications of continued fractions, other
>than:
>(1) Pell's Equation (x^2 - d y^2 = k, usually k=1)
>(2) Approximating log_b x, where b and x are integers, using only
> integer arithmetic.
Continued fractions are good for function computations.
For fixed-precision library programs, they are often
replaced by carefully compiled rational functions, but
if more precision is needed, they are often used. They
typically converge faster than series.
Also, using integer arithmetic, they can be used to
get accurate solutions of polynomial equations with
integer coefficients.
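As an illustration of the integer-arithmetic point, here is the standard quadratic-surd recurrence for the special case x^2 = d; the general method for arbitrary integer polynomials (due to Lagrange) is more involved, and the function name here is my own:

```python
import math

def sqrt_cf_quotients(d, count):
    """First `count` continued-fraction quotients of sqrt(d), for a
    positive non-square integer d, using integer arithmetic only.
    Uses the classical recurrence for quadratic surds:
        m' = den*a - m,  den' = (d - m'^2) / den,  a' = (a0 + m') // den'."""
    a0 = math.isqrt(d)
    m, den, a = 0, 1, a0
    quotients = [a0]
    while len(quotients) < count:
        m = den * a - m
        den = (d - m * m) // den   # division is always exact here
        a = (a0 + m) // den
        quotients.append(a)
    return quotients
```

The convergents built from these quotients converge to the positive root of x^2 - d = 0, and the expansion is eventually periodic, which is also the starting point for solving Pell's equation.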

This address is for information only. I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Department of Statistics, Purdue University
hrubin@stat.purdue.edu Phone: (765)4946054 FAX: (765)4940558
====
Subject: Re: Applications of Continued Fractions?
>Does anyone know of any nice applications of continued fractions, other
>than:
>(1) Pell's Equation (x^2 - d y^2 = k, usually k=1)
>(2) Approximating log_b x, where b and x are integers, using only
> integer arithmetic.
Reducing modular symbols to unimodular symbols. There
are a couple of people carrying on a thread of their own
about newforms and oldforms, who might be persuaded
to explain the analysis and number theory of modular
symbols, which are (like newforms and oldforms)
part of the theory of modular forms. I can give you
only a quick and unmotivated geometric description, but
I hope you'll see the beauty of the subject anyway.
Consider the (onepoint) compactification R* of the real
numbers as the boundary of the (compactified) upper half
plane in C, and within R* consider Q*, the rational points
together with the point at infinity. Identify each rational
number p/q (in lowest terms, with q > 0) with the integer
column matrix (which I will write as a row matrix) [p q],
and identify the point at infinity with [1 0]; the column
matrices so obtained are precisely the primitive integer
column matrices (i.e., the column matrices which can belong to
a basis of Z+Z) with nonnegative second row. To each
ordered pair ([p q], [r s]) of distinct primitive integer
column matrices, associate two things, one geometric and
one algebraic. The geometric one is the semicircle in
the (compactified) upper half plane that is perpendicular
to R* at p/q and r/s (if one of p/q, r/s is infinity, this
semicircle is actually a vertical half line); equivalently,
the (closure of) the geodesic in the upper half plane,
with its standard hyperbolic metric, with ideal end points
at p/q and r/s. The algebraic one is the nonsingular
matrix [p q; r s]. Either of these things is called
a modular symbol (or more precisely a universal
modular symbol...I think). Viewed as a matrix, a
modular symbol has a determinant that is a nonzero
integer; if this integer is +1 or 1, the symbol is
unimodular. The continued fraction algorithm gives
a systematic way of symbolically reducing an arbitrary
modular symbol to a formal sum of unimodular symbols;
for example, [1 0; 5 7] reduces to the formal sum
[1 0; 0 1] + [0 1; 1 2] + [1 2; 2 3] + [2 3; 5 7]
(if I've done my arithmetic correctly). If you draw
the corresponding geometric modular symbols, you
will find that you have created an ideal polygon
in the upper half plane, which gives (in an appropriate
context) a homology between a 1-cocycle (somewhere)
represented by (in the example) the geometric
modular symbol corresponding to [1 0; 5 7]
and the sum of geometric unimodular symbols
corresponding to [1 0; 0 1], [0 1; 1 2], [1 2; 2 3],
and [2 3; 5 7]. Pretty!
Infinite continued fractions can also be interpreted
in this way, with some work, I believe.
Lee Rudolph
====
Subject: Re: Applications of Continued Fractions?
> Does anyone know of any nice applications of continued fractions,
other
> than:
>
> (1) Pell's Equation (x^2 - d y^2 = k, usually k=1)
> (2) Approximating log_b x, where b and x are integers, using only
> integer arithmetic.
>
> (3) providing good rational approximations of any positive real number.
Indeed, Proginoskes was a bit limited in his listings. It is a pity that
the basic English literature on this is governed by a small book by Olds,
while much of the basic work by Perron is only available in German. (If
you are able to read German, find it. As far as I know it is the best on
this subject.)
Back in the eighties I was involved with the Ada effort. There was a
question by one of the English participants how to do various functions
in fixed point arithmetic. Of course, continued fractions were the
answer. Anyhow, they could settle with 355/113 for pi in their
implementation, and also show that it was the best available in 6 digit
arithmetic.
> (4) help prove a number is irrational (infinite c.f.) or transcendental
> (some properties on the partial quotients). Look for "Diophantine
> approximation" on the web. If I remember correctly, an irrational
> algebraic number cannot be approximated too well; if it is, it's
> actually a transcendental number.
Well, this one is misleading (and I do not think that continued fractions
help much). Indeed, if you can show that a rational number is "too close"
to the number to be approximated, the number is transcendental (Liouville,
I think). That does indeed mean that the numbers in the continued fraction
cannot be too large. But that is not much help; even though 355/113 is
a pretty good approximation to pi, it does not show that pi is
transcendental.

dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland,
+31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
====
Subject: Re: Applications of Continued Fractions?
On 24-06-2005, Dik T. Winter wrote:
> Back in the eighties I was involved with the Ada effort. There was a
> question by one of the English participants how to do various functions
> in fixed point arithmetic. Of course, continued fractions were the
> answer. Anyhow, they could settle with 355/113 for pi in their
> implementation, and also show that it was the best available in 6 digit
> arithmetic.
Yes, but if the expansion contains quickly increasing quotients, you
already have an idea of a way to prove transcendence; generally the
easiest approach will rest on a Liouville/Thue/Roth argument, playing
with this fact.
In the expansion of pi you have nothing of that kind. Indeed, if you
take any useful number (Liouville numbers are not very useful), the
growth will certainly be too slow to be significant, but:
- you can construct transcendental numbers by using these properties
(Liouville numbers, the Champernowne number);
- you can also construct transcendental continued fractions by using
other properties (see Perron's book), and other constraints such as
using only bounded quotients.

Thomas Baruchel
====
Subject: Re: Applications of Continued Fractions?
> > > Does anyone know of any nice applications of continued fractions,
> > > other than:
> > > (1) Pell's Equation (x^2 - d y^2 = k, usually k=1)
> > > (2) Approximating log_b x, where b and x are integers, using only
> > > integer arithmetic.
> > (3) providing good rational approximations of any positive real number.
> Indeed, Proginoskes was a bit limited in his listings. It is a pity that
> the basic English literature on this is governed by a small book by Olds,
> while much of the basic work by Perron is only available in German. (If
> you are able to read German, find it. As far as I know it is the best on
> this subject.)
I took German in high school and college, but I've forgotten a lot of
it. (Besides, they didn't teach me how to say things like "prime
number" in German in the first place.) I'm in the middle of translating
a paper by Friess (German to English), and I had to learn a lot of the
terminology. Berge's _Graphs and Hypergraphs_ was a great find, since
it lists English, French, and German translations of graph theory
terms, but I only found this out recently. Does anyone know of a
similar Rosetta Stone for Number Theory?
> Back in the eighties I was involved with the Ada effort. There was a
> question by one of the English participants how to do various functions
> in fixed point arithmetic. Of course, continued fractions were the
> answer. Anyhow, they could settle with 355/113 for pi in their
> implementation, and also show that it was the best available in 6 digit
> arithmetic.
> > (4) help prove a number is irrational (infinite c.f.) or transcendental
> > (some properties on the partial quotients). Look for "Diophantine
> > approximation" on the web. If I remember correctly, an irrational
> > algebraic number cannot be approximated too well; if it is, it's
> > actually a transcendental number.
> Well, this one is misleading (and I do not think that continued fractions
> help much). Indeed, if you can show that a rational number is "too close"
> to the number to be approximated, the number is transcendental (Liouville,
> I think). That does indeed mean that the numbers in the continued fraction
> cannot be too large. But that is not much help; even though 355/113 is
> a pretty good approximation to pi, it does not show that pi is
> transcendental.
Yes; you need an infinite number of them. 8)

====
Subject: Re: Applications of Continued Fractions?
>
> Does anyone know of any nice applications of continued fractions, other
> than: ....
>
> (3) providing good rational approximations of any positive real number....
Yes. This one deserves to be better known even at a quite
elementary level. In the past I've introduced it successfully to a
first-year class straight after the Euclidean algorithm.
Here are a couple of examples which I posted to the
alt.math.undergrad news group in a thread "Representing arbitrary
fractions as rational expressions" in May last year.
> As a simple example, here's how to find a simple fraction
> approximating 0.835. Of course it's exactly 167/200, but you might
> want an approximation with smaller numerator and denominator.
>
> Repeatedly taking the reciprocal and then subtracting the integer
> part, leads to
>
> 0.835 = 1/(1 + 1/(5 + 1/(16 + 1/2))).
>
> When a reasonably large number turns up, try neglecting its reciprocal.
> In this example 16 is noticeably large, so 1/16 is a lot smaller than 5.
> That suggests the approximation
>
> 1/(1 + 1/5) = 5/6, which you can see is pretty close to 0.835.
>
> A wellknown example is pi, whose infinitely continued fraction
> begins
>
> pi = 3 + 1/(7 + 1/(15 + 1/(1 + 1/(292 + ......)))).
>
> Neglecting 1/15 gives the well-known approximation 3 + 1/7 = 22/7.
> Neglecting the smaller number 1/292 gives the much better
> approximation 3 + 1/(7 + 1/(15 + 1/1)) = 355/113.
>
> HTH
>
> Ken Pledger.
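Ken Pledger's recipe is exactly the Euclidean-algorithm expansion, and it is
easy to try by machine. A minimal sketch (the function names `continued_fraction`
and `convergent` are mine, not from the thread):

```python
from fractions import Fraction

def continued_fraction(x, terms=8):
    """Expand x into continued-fraction coefficients [a0; a1, a2, ...]."""
    coeffs = []
    x = Fraction(x)
    for _ in range(terms):
        a = int(x)           # integer part
        coeffs.append(a)
        x -= a
        if x == 0:
            break
        x = 1 / x            # reciprocal of the fractional part
    return coeffs

def convergent(coeffs):
    """Fold coefficients back into a single fraction."""
    result = Fraction(coeffs[-1])
    for a in reversed(coeffs[:-1]):
        result = a + 1 / result
    return result

print(continued_fraction(Fraction(835, 1000)))   # [0, 1, 5, 16, 2]
print(convergent([0, 1, 5]))                     # 5/6
print(convergent([3, 7, 15, 1]))                 # 355/113
```

Truncating the coefficient list just before a large coefficient (16 here, or
292 for pi) is exactly the "neglect its reciprocal" step in the post.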
====
at UCR was about gauge theory and topology. Derek Wise
took notes on my lectures.
In the final week of the Fall quarter, we showed how to describe
the Fukuma-Hosono-Kawai topological
quantum field theories in terms of the vector space for the
circle. This is a commutative Frobenius algebra: the center
of the semisimple algebra we started with. The whole process
of going from the semisimple algebra to its center can be seen
as building a topological string theory from a topological
We wrapped up by giving a key example of a semisimple algebra:
the group algebra of a finite group.
In the next quarter, we saw how this gives a topological *gauge
theory* in 2 dimensions, called the Dijkgraaf-Witten model. We
also categorified the whole story so far, and got theories in 3
dimensions. At the end of the last class of the Fall quarter,
we gave a sneak preview of how this works.
Notes from the whole fall quarter can be found here:
Notes from other sessions of the Quantum Gravity Seminar
are here:
http://math.ucr.edu/home/baez/QG.html
Have a fun summer!
====
Subject: Re: Solving for a Transfer Function
Paul Cardinale wrote:
> why don't i believe you? if so, then why not just define f3 by f3(h)=
> (h2/h1) h, then you would have f3(h1)=h2. this would be valid since you
> didn't specify anything further on f3.
> I want f3 to be a function of h1, not a function of (x,y).
> I.e. given only a numerical value for h1, I want a function that yields
> h2.
Paul Cardinale
Dear Paul,
You have a strong constraint on functions f1 and f2.
f3(v) is a one-variable function, R -> R, bijective.
The relation f2(x,y) = f3(f1(x,y)) means that f1 and f2 are iteration
powers of f3: f1 = f3^[r], f2 = f3^[r+1], or in your case r = m(x,y) and
f1(x,y) = f3^[m(x,y)] u(x,y)
f2(x,y) = f3^[m(x,y)+1] u(x,y)
where m(x,y) can be null, constant, or reduced to m(x) or m(y), and
u(x,y) is a bivariable function.
Alain.
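Alain's "iteration powers" f3^[r] are easy to make concrete. A minimal sketch,
with an arbitrary example bijection standing in for f3 (my own illustration,
not from the thread):

```python
def iterate(f, n):
    """Return the n-th iterate f^[n]: apply f to its argument n times."""
    def fn(x):
        for _ in range(n):
            x = f(x)
        return x
    return fn

f3 = lambda v: 2 * v       # an example bijection R -> R (assumed, for illustration)
f1 = iterate(f3, 3)        # plays the role of f3^[r] with r = 3
f2 = iterate(f3, 4)        # f3^[r+1]

x = 5.0
print(f2(x) == f3(f1(x)))  # True: f2 = f3 o f1, the relation in the post
```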
====
Subject: If Herc's lies were felonies, with three-strikes rule
X-Spam-This: SpamCopies@YahooGroups.Com
> Every finite prefix is present. That is every sequence possible to
> infinite length is on the list.
Are you claiming that if every finite prefix is present then every
infinite sequence is also present? Or are you admitting that every
finite prefix can be present yet still there can be infinite sequences
not present?
====
Subject: If Herc's lies were felonies, with three-strikes rule
> The number set is missing *some* reals X
Which number set are you talking about? The one which was a sequence
that you claimed contained *all* the reals, i.e. the sequence you
claimed wasn't missing any reals? So are you admitting there really
were some missing, which means you told a lie when you said that
sequence wasn't missing any reals?
====
Subject: If Herc's lies were felonies, with three-strikes rule
Herc has become a contestant in a Chess tournament. In his first game
Herc is down to king and bishop, against opponent's king. Opponent
offers a draw, which Herc refuses. Herc is sure he can win. After a few
moves, Herc writes CHECKMATE in the official log. His opponent calls the tournament
director over to the board, and the director feeds the official log
into a computer program, verifies the resultant position matches
between computer and physical board. Then Herc's opponent moves his
king to a spot where it's not in check, thereby showing that Herc's
CHECKMATE annotation was a lie. The tournament director calls the FBI,
which sends somebody over to place him under arrest, and after trial
he's convicted of a felony and sentenced to 10 years in prison.
While Herc is serving his term, he becomes a contestant in a Chess
tournament held among prisoners. Again he claims CHECKMATE when it's
not, and again he's charged with another felony, convicted, and now is
serving a 50 year sentence in maximum security.
Now he becomes a contestant in the Maximum Security Chess tournament,
and again he lies, says it's CHECKMATE when it isn't. Third strike,
death sentence to Herc.
Now let's switch to math: Herc is playing a game whereby he makes up a
sequence of real numbers, and his opponent tries to find another real
number that Herc had failed to include. At any point Herc can call
CHECKMATE to indicate he believes his opponent has no further legal
move, cannot find any real number overlooked by Herc's latest sequence.
At some point Herc calls CHECKMATE, and as usual his opponent uses the
diagonal argument to produce a number not in Herc's latest list, so
Herc is convicted of one felony. But because mathematicians are on good
terms with the government, Herc is allowed to continue the game before
starting his prison sentence. Herc adds that diagonal from his
opponent's latest move to his list, and calls CHECKMATE again. Again
the opponent uses the diagonal argument to produce another number which
is not only omitted from Herc's original list but omitted from his very
latest list too. Second strike. Herc adds that second antidiagonal to
his list and says CHECKMATE again. His opponent uses the diagonal
argument a third time, showing Herc has committed a third felony, three
strikes, Herc is executed on the spot. (Mathematicians are nice up to a
point, but after three blatant lies from the same person, they can
tolerate no more delays to his execution.) Nobody ever again must read
his crap on the net.
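The opponent's "move" in the story is completely mechanical. A sketch of the
diagonal step, on finite digit prefixes for illustration only (names mine):

```python
def antidiagonal(rows):
    """Build a digit sequence differing from row k at position k."""
    return [5 if row[k] != 5 else 4 for k, row in enumerate(rows)]

# A toy "list": row k repeats the digit k mod 10.
rows = [[k % 10] * 10 for k in range(10)]
diag = antidiagonal(rows)

# The antidiagonal differs from every row at the diagonal position,
# so it cannot equal any row of the list.
print(all(diag[k] != rows[k][k] for k in range(10)))   # True
```

Appending `diag` as a new row and recomputing just produces yet another missing
sequence, which is the "second strike" in the story.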
====
Subject: partial derivative
I have a question about partial derivative of an arctan function.
Let phi(x,y) = arctan(g(x,y), f(x,y)). Suppose also that at x = X and y = Y,
both f(X,Y) = 0 and g(X,Y) = 0.
If this is the case, I want to show that partial_{y} (partial_{x} phi) is not
equal to partial_{x} (partial_{y} phi) as (x,y) -> (X,Y).
In general, I know that both partial derivatives should be equal.
Can anyone help me with what the requirement is such that we don't have
equality?
Natanael
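The usual requirement is Clairaut/Schwarz: the mixed second partials are equal
wherever they are continuous, and equality can fail exactly at a point of
discontinuity. A numerical sketch with the classic counterexample (this is the
textbook function, not Natanael's atan2 expression; step sizes are my choice):

```python
def f(x, y):
    """Classic Schwarz counterexample: mixed partials at the origin disagree."""
    if x == 0 and y == 0:
        return 0.0
    return x * y * (x * x - y * y) / (x * x + y * y)

def fx(x, y, h=1e-6):
    """Central-difference estimate of df/dx."""
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def fy(x, y, h=1e-6):
    """Central-difference estimate of df/dy."""
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

h = 1e-3
fxy_at_0 = (fx(0, h) - fx(0, -h)) / (2 * h)   # d/dy of f_x at (0,0)
fyx_at_0 = (fy(h, 0) - fy(-h, 0)) / (2 * h)   # d/dx of f_y at (0,0)
print(fxy_at_0, fyx_at_0)   # approximately -1 and +1
```

Away from the singular point the two mixed partials of such functions agree;
it is the lack of continuity of the second partials at (X,Y) that lets them
differ there.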
====
Subject: Re: The WHO CARES proof of antiantidiagonalisation
> The length of the largest THTHTH initial segment is greater than any
> natural number.
What do you mean by the largest THTHTH initial segment? What makes
you think there is any such thing? In fact there is no such largest,
and you're an idiot to think there is.
Do you think there's a largest integer? If so, you're an idiot, again.
How are you an idiot? Let me count the ways ...
====
Subject: Re: The WHO CARES proof of antiantidiagonalisation
> It is like playing chess. You play your move, and then I play mine.
> Oh, you realise, you would like to change your mind and play a
> different move instead. Fine, I say, but if you change your move, you
> have to allow me to change my move also.
Bad metaphor, because that way he can keep playing forever with no hope
of ever winning, but you can never prove that, all you can do is waste
your time responding forever. Even if he has king and bishop against
your king, insufficient material for checkmate, you can't prove the
game is a draw if you keep responding to his moves endlessly.
The correct analogy with chess is checkmate: If at some point with only
king and bishop against king he calls checkmate, you can demonstrate
that your king still has a legal move, so he lied when he called
checkmate.
Suppose he says it's not checkmate now, but in a few moves it will be,
he's not sure how many moves so he can't promise checkmate in a
particular number of moves, but he says he'll eventually be able to
checkmate you. You present a proof that he has insufficient material
for checkmate, so even if you make a mistake it'll be impossible for
him to ever claim checkmate. King and bishop are insufficient material
to ever achieve checkmate. There's a proof of that fact.
Analogously, a countable sequence is insufficient material to exhaust
the reals. Cantor proved that. If Herc says his current sequence
doesn't exhaust the reals, but some other countable sequence would
exhaust the reals, i.e. he can eventually checkmate the reals with a
countable sequence, Cantor showed any such checkmate claim is a lie, so
Herc's claim that he can eventually checkmate the reals is a lie.
====
Subject: Re: The WHO CARES proof of antiantidiagonalisation
> Sure you can disprove certain specific defined functions don't exist.
> the function that calculates halt values for every program,
Wow!! This is almost a miracle! Herc has told the truth, has stated
Turing's theorem succinctly and correctly!
Stephen Montgomery-Smith caused this, so all he needs is to cause two
more miracles to achieve sainthood. :)
====
Subject: Re: The WHO CARES proof of antiantidiagonalisation
> One way to see the Turing halting theorem is this. If you give me a
> computer program which you claim can check if programs halt or not, I
> can create a program for which your program will fail.
I think in the case of troll Herc, you should be more blunt:
If you give me a computer program which you claim can check if
programs halt or not, I can demonstrate that you're lying.
Same goes for Cantor's proof: Herc, if you claim you can put the real
numbers into a correspondence with the integers, I can demonstrate that
you are lying.
If Herc weren't aware of Cantor's and Turing's proof, we might be able
to give him the benefit of doubt, that he's merely mistaken not
deliberately lying. But Herc has been told about them over and over,
had them explained many ways, so he's quite aware of them, and if he
says anything contrary to either he's flatout lying.
Unfortunately lying isn't a crime, or Herc would be in prison already.
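The construction quoted above can be sketched in a few lines. This is a toy
illustration only: the "checker" below is a deliberately bad stand-in, since
Turing's theorem says a correct one cannot exist (names are mine):

```python
def make_confounder(claims_to_halt):
    """Given a purported halting checker, build a program it must misjudge."""
    def confounder():
        if claims_to_halt(confounder):
            while True:        # checker said "halts", so loop forever
                pass
        return None            # checker said "loops", so halt immediately
    return confounder

def naive_checker(prog):
    return False               # claims every program runs forever

troublemaker = make_confounder(naive_checker)
troublemaker()                 # halts at once, so naive_checker was wrong about it
```

Whatever the checker answers about its own confounder, the confounder does the
opposite, which is the demonstration that the checker's claim was a lie.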
====
Subject: Re: The WHO CARES proof of antiantidiagonalisation
> Imagine infinite people are all flipping coins infinite times.
That's an extremely vague statement. It merely says the number of
people isn't finite. It gives no idea how many the people are, in any
sense other than not finite. Likewise each person flips coins some number
of times which is not finite. Also you give no indication of whether the
two quantities (people, and flips per person) are the same or different.
Depending on how many people and how many flips per person, the answer
to your later questions can be yes or no or not possible to determine
given the info available. If you don't answer exactly how many people
and how many flips per person, then the only answer to your questions
is insufficient information to decide.
> Can you come up with a new sequence every time?
What exactly do you mean by sequence here? No mention of sequence
occurred in your initial statement. For example, if one person flips an
uncountably infinite number of times, such as one flip for each real
number, then what sequence are you talking about??
Next you draw a diagram with integers numbering the different people,
which seems to imply a countable infinity of people. Is that what you
meant to say in the opening sentence, or not?
In that same diagram you seem to be showing a sequence of flips, and no
other flips, which makes sense only if the number of flips per person
is countable infinity. Is that what you meant to say in the opening
sentence, or not?
If both are true, that is countable number of people each flipping coin
countable number of times, with a particular sequence for those flips
specified, then the answer is yes, it is definitely possible for each
person to flip a different sequence. For example, the first person can
flip all heads, the second person can flip one tail followed by all
heads, the third person can flip tail tail then all heads, etc. Each
person has flipped a different sequence so that shows it *is* possible.
Is that what you really meant to ask in the second line of your
posting?
Now if you don't specify the sequence of flips for each person, like
the person has a countablyinfinite collection of unnumbered coins and
flips them all at once so there's no way to tell which was the first
coin and which was the second coin, so if you see the sequence H T H H
H ... and T H H H H ... those may be regarded as the same sequence,
then still the answer is yes, it's possible for the people to flip such
all-at-same-time sets of coins such that no matter how you arrange each set
of flips they don't match any other set of flips. How, you ask? Isn't that
much more difficult to accomplish? Use the same example as before
except instead of talking about a sequence starting with n tails and
all the rest after are heads, you merely talk about a countably
infinite set where n are tails (in no particular order with respect to
the rest) and the rest are heads. So person#1 flips all heads, person
#2 flips all heads except for one tail, person #3 flips all heads
except for two tails, etc.
Now suppose even the people aren't in any particular order, we just know
the set of people are countable. Can you still specify an example
whereby no two people flip the same quantity of heads and tails? Yes:
Pick any random person and say he flips all heads. Pick any remaining
person and say he flips all heads except one tail. Etc. I.e. for each
nonnegative integer n there is exactly one person who flips n tails and
the rest heads, while everyone is one of these n-tails-rest-heads
flippers.
I have no idea what taking a diagonal has to do with answering the
question you asked. You seemed to have lost track of your question and
are saying random things unrelated to it. (Unless I misunderstood your
question. But the way you worded it, you seemed to be saying how I
interpreted it above. If you meant something else, please explain.) I
assumed by "new sequence" you mean a sequence not the same as anybody
else's sequence, and by "every time" you mean for each of the
flippers. "Can you come up with" obviously means "Can you think of an
example where". So you are asking whether I can think of an example
where each person flips a sequence different from anybody else's
sequence. Is that what you meant to ask? If not, what???
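The n-tails-then-all-heads example above is easy to make concrete. A minimal
sketch on finite prefixes (the function names are mine):

```python
from itertools import islice

def flips(person):
    """Person n's sequence: n tails ('T') followed by heads ('H') forever."""
    k = 0
    while True:
        yield 'T' if k < person else 'H'
        k += 1

def prefix(person, length):
    """The first `length` flips of a given person's sequence."""
    return ''.join(islice(flips(person), length))

print(prefix(0, 5))   # HHHHH
print(prefix(2, 5))   # TTHHH
# People m < n differ at flip number m, so all the sequences are distinct:
print(all(prefix(m, 10) != prefix(n, 10)
          for m in range(10) for n in range(m + 1, 10)))   # True
```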
====
Subject: Re: The WHO CARES proof of antiantidiagonalisation
> Every step is wrong because all reals are in the list.
That just doesn't make any sense at all, I'm afraid...
> You take an infinite plane of digits (the list of reals); each row is
> a real.
> You draw a diagonal down it.
> You call that a real.
The diagonal can be used to define a specific real number, yes.
> You just can't do that... and make any sense afterwards.
Er, yes you can.
> The list contains all possible finite prefixes of digits.
Lists exist that contain all possible finite prefixes of digits, yes. Not
all lists have this property, but let's assume we are working with one which
does, e.g. T_10.
> YOU CLAIM it doesn't contain all infinite sequences.
I PROVE it doesn't contain a particular real number (the antidiagonal).
> It contains all possible digit strings to infinite length.
Does it really? You need to prove this, if this is what you believe...
In fact, to avoid future confusion (given my reading of what follows) you
should also clarify exactly what you mean by "It contains all possible
digit strings to infinite length". I've always taken this as meaning: if
I propose ANY infinite digit string (i.e. real number between 0 and 1)
then the list contains a particular entry which matches it in every
single digit position.
(Only clarify if this isn't what you mean, since I think this is what
everybody else would take it to mean...)
> That means, if you consider the total infinite plane in its entirety,
> the complete digit sequence of the antidiagonal is on the list.
This obviously can't be true, as suppose (for example) the antidiagonal was
list element number 1829937. Then we see this can't be the case, as the
antidiagonal in fact does not equal this element for reasons already
discussed. Similarly for any other element number of the list. I.e. the
antidiagonal can't be on the list.
> Proof:
> Assume only a finite portion of antidiag is on the list.
What exactly do you mean by this? Since you're trying to finally deduce
that the antidiag itself IS on the list, and you're arguing via reductio ad
absurdum, I'll take it that your statement is supposed to be the negation
of
this, i.e. more precisely
: Assume the antidiag does not appear anywhere on the list.
or equivalently in terms of your decimal representations...
: Assume there is no entry on the list which matches every single digit
with the antidiag
[If you mean something else, please post back and make it absolutely clear
what you mean, but be warned  if you do mean something else, then the
contradiction you finally arrive at won't prove the antidiag appears on the
list...]
So what would follow given this assumption? All we can say is every entry
on the list would fail to match the antidiag at *some* point; we could
have an entry that fails at the 10th digit, another that fails at the
100th digit, another at the 100000000000th digit and so on. In fact we
can (and will) find numbers that match the antidiag to increasingly
longer and longer prefixes, without bound on the length of matching
prefix. Yet still no entry will match ALL its digits.
> Let that sequence length be L.
Aha, spotted the flaw straight away! What sequence length is this?
Although the antidiag does not appear in its infinite entirety on the list,
this doesn't imply there is any maximal prefix length that appears on the
list. I.e. you are assuming something which does not follow from your
initial assumption (and in fact is false).
Example to think about: with the following list:
0.7000000...
0.7700000...
0.7770000...
0.7777000...
....
the number 0.7777777... does not appear on the list, but also there is no
maximal prefix that appears on the list either! And so it is also with the
antidiagonal.
That's right! For ANY natural number n you can find an entry on the list
whose prefix of length n matches the antidiagonal. There is NO MAXIMUM n
for which this is true; yet still there is no entry on the list that
matches ALL of its infinite sequence of digits with the antidiag!
> For all L, there exists a finite prefix of length L+1 that matches the
> antidiag.
> Therefore the assumption is refuted.
As noted, you've assumed there is a maximal prefix length that matches,
which is just plain wrong.
Unless, that is, you meant something different by your initial assumption
: only a finite portion of antidiag is on the list?
If by this you meant, for example, precisely that there IS such a maximal
prefix, then your proof would be COMPLETELY CORRECT, but of course all you
end up proving is just the negation of this assumption, i.e.
: there is no maximal length prefix that is on the list
This is not at all the same as saying that the antidiagonal itself is on
the list!
Just as with my example above, there is no maximal length prefix of
0.7777777... that is on my example list, and yet also 0.77777777...
itself is not on the list.
Mike.
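Mike's 0.7000.../0.7700... example can be checked mechanically on finite
windows of digits. A sketch (truncation widths are my own choice, purely for
illustration):

```python
def row(i, width):
    """Row i of the example list, to `width` digits: i+1 sevens, then zeros."""
    return ('7' * (i + 1)).ljust(width, '0')

def match_len(a, b):
    """How many leading digits two digit strings share."""
    n = 0
    while n < min(len(a), len(b)) and a[n] == b[n]:
        n += 1
    return n

width = 100
target = '7' * width   # a finite window onto 0.7777...

lengths = [match_len(row(i, width), target) for i in range(50)]
print(lengths[:5])     # [1, 2, 3, 4, 5]: matching prefixes grow without bound
print(any(row(i, width) == target for i in range(50)))   # False: no row matches fully
```

This is exactly the point of the post: the match lengths are unbounded, yet no
single row agrees with the target in every position.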
====
Subject: Re: The WHO CARES proof of antiantidiagonalisation
In 500 years the stance and arguments of you both will be certificates
of a 90 IQ.
That doesn't mean you're stupid, by today's standards, but the nonsense
you insist on will soon be recognised for what it is, and the early
recorded history of usenet will be known for the remnants of mass
cultism in which you partake.
So if INFINITE people flip coins infinite times each, you can construct
an original sequence? BS
What's the width of this construction?
0.3
0.31
0.314
...
It's just function parsable notation for pi.
There's oo many digits 0..9 in every column of a computer generated
list, you can just PICK a diagonal when you order it. I can't debate
morons any longer, like I said go ahead knock yourselves out and
pretend a = not a,a
is a gateway to joining the dots on the number line.
Herc
====
Subject: Re: The WHO CARES proof of antiantidiagonalisation
In sci.logic, HERC777 wrote
on 23 Jun 2005 18:38:07 -0700:
> In 500 years the stance and arguments of you both will be certificates
> of a 90iq.
> That doesn't mean you're stupid, by today's standards, but the nonsense
> you insist on will soon be recognised for what it is, and the early
> recorded history of usenet will be known for the remnants of mass
> cultism in which you partake.
> So if INFINITE people flip coins infinite times each, you can construct
> an original sequence? BS
> What's the width of this construction?
> 0.3
> 0.31
> 0.314
> ...
> It's just function parsable notation for pi.
Formally, it's PI_n = {floor(pi * 10^(k-1)) / 10^k: k > 0, k in J}.
This is of course a proper subset of
T_10 = {b / 10^k: b, k in J, 0 <= b < 10^k}.
The width of PI_n can be formally defined; it's
max {k in J: (exists s in PI_n) (s * 10^k in J)}
or some such. It's of course +oo, or perhaps beth_0.
What this has to do with the cardinality of R, I for one don't know.
[rest snipped]
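The truncation sequence 0.3, 0.31, 0.314, ... is generated by
floor(pi * 10^(k-1)) / 10^k, which a quick sketch confirms (floats suffice at
these few digits; this is my own check, not part of the post):

```python
from math import floor, pi

def trunc(k):
    """k-th decimal truncation of pi/10: floor(pi * 10^(k-1)) / 10^k."""
    return floor(pi * 10 ** (k - 1)) / 10 ** k

print([trunc(k) for k in range(1, 4)])   # [0.3, 0.31, 0.314]
```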

#191, ewill3@earthlink.net
It's still legal to go .sigless.
====
Subject: Re: Orlow cardinality question
>> *Sigh* all right, I'll dig up the reference...
>>
>> Your words:
>> By induction, the set size is ALWAYS the same as the maximal
>> number.
>>
>> So if you call the size of N (the set of naturals) N, then N should
>> also be the maximal element of N. However, by definition, N+1 is also
>> in N, and since N+1>N, N is no longer the maximum element of N.
> Uh huh. That's a problem isn't it? Well if you declare the size to be N
> then you can always add another element, and get a set of size N+1 can't
> you? The same problem exists for both maximal element AND size for the
> finite naturals, which Cantor has resolved by falsely claiming the
> finite naturals constitute an infinite set.
The set of naturals is /defined/ such that for each n in it, n+1 is
also in it. You don't 'add' or 'remove' elements from it, it is just
defined as the set of all such elements.
You propose that the set of integers 'stops' somewhere, notably at
'infinity'. Which is, later, defined as the size of the set. This is
a _circular_ definition! You make the set 'stop' at infinity, but you
never choose at which of all your infinities you want it to stop. Is it
just some random one? Why should it stop there? Why not add R ones
together, and get the 'integer' R? You can call anything an integer
that way. Add Tony ones together, and the sum will be Tony. Wow, I've
proved Tony is an integer!
Jan
====
Subject: Re: Orlow cardinality question
On Tue, 21 Jun 2005 14:23:40 0400, Tony Orlow (aeo6)
>Martin Shobe said:
>> On Mon, 20 Jun 2005 14:21:38 0400, Tony Orlow (aeo6)
> Let's see. You don't think having the size of a set be dependent on
> the characteristics of another set isn't a problem?
>>You mean like comparing to a standard set?
>>
>> I don't have a problem comparing things to a standard set. I do have
>> a problem with a claim about size where the relative sizes of sets
>> depends on the choice of a third set. I find that *extremely*
>> counterintuitive.
>Cardinality uses all sorts of extra sets. What third set are you referring
>to, besides the standard discrete infinity?
Well, at one point, I recall you using R as the basis for your
comparisons. You then stated that a set that had been strictly smaller
than another set, now had the same size as that other set. I find
that to be a problem.
> You don't think
> that having sets for which no size exists isn't a problem?
>>I think there may be sets which are hard to compare to standard sets,
>>but all sets have a size. What set did I say has no size? The set of
>>finite naturals? I don't see that as any more of a problem than having
>>no maximal element in your finite range. Live with it.
>>
>> I don't have to. I have a perfectly servicable theory of set size
>> that doesn't have that problem.
>You mean you have a maximum element, and a size that makes sense in the
>context of other math? That's news.
No. I mean I have a theory of set size that works even when sets
don't have maximal elements. And it makes sense in context of other
maths since almost all of the rest of mathematics can be modeled in
set theory.
> You don't
> think having sets with multiple sizes isn't a problem?
>>I don't have sets with multiple sizes. I have set expressions which can
>>be interpreted as different sets, depending on whether you are talking
>>about quantity or symbolic representation. That's a step toward
>>clarification, when you are using mixtures of these concepts and getting
>>mixed results.
>>
>> Yes, you do. You assigned a single set the sizes N/2 and N.
>No, those were two different set definitions. On the one hand you removed
>half the elements, and on the other you doubled all their values. In the
>second case, you haven't removed any elements, but doubled the range while
>halving the density. In the first, you simply halved the density and left
>the range untouched.
Every member of the first set is a member of the second, and vice
versa. That means that those sets are actually the same set. And
you've assigned two different set sizes to it. I find that to be a
huge problem.
Martin
====
Subject: Re: Orlow cardinality question
On Thu, 23 Jun 2005 14:07:22 0400, Tony Orlow (aeo6)
>Given a set of symbols with size S, we can produce a set of all strings
>using those symbols that have length L, and the size of this set will be
>S^L. Digital number systems fall into this category, with S being the
>number base, which is always finite (2 for binary, 10 for decimal). If we
>want to have an infinite set of digital strings, therefore, S^L needs to
>be infinite, but S is finite, so L, the length of the strings, needs to
>be infinite to have an infinite set of such strings.
This statement is in error. You can produce an infinite set of
strings by using all the finite strings and none of the infinite ones.
Let X_S be the set of finite strings. Let a be in S. Then define
f: X_S -> X_S as follows.
For all x in X_S, f(x) = xa, where xa means the string x with the
symbol a appended on the right.
For all x in X_S, f(x) is in X_S, because x is a finite string,
therefore, xa is a finite string.
The empty string is not in f(X_S), because every element in f(X_S) has
the symbol a on the right end.
f is clearly one to one: f(x) = f(y) => xa = ya => x = y.
Therefore, f is a bijection from X_S to a proper subset of X_S.
Therefore, X_S is infinite.
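Martin's map f(x) = xa can be sketched directly; a minimal illustration over
the symbol set {'a', 'b'} (sample strings are my own):

```python
def f(x, a='a'):
    """Append the symbol a on the right: the injection from the proof above."""
    return x + a

samples = ['', 'b', 'ab', 'ba']          # a few finite strings over {'a', 'b'}
images = [f(x) for x in samples]

print(len(set(images)) == len(samples))  # True: f is one-to-one on the sample
print('' in images)                      # False: the empty string is missed,
                                         # so f maps into a proper subset
```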
>By the definition of digital systems, where each digit as we move left
>represents a multiple of the next higher power of the number base, any
>nonzero digit an infinite number of positions to the left of the digital
>point represents a multiple of the number base to an infinite power, or
>an infinite value. Since a digital number system fully utilizes all
>combinations of digits to produce its values, most of the infinite
>strings in the infinite set will have nonzero digits in infinite
>positions, and represent infinite values.
>Therefore, since an infinite set of digital whole numbers requires the
>full set of infinite strings of digits, and infinite strings of digits
>mostly represent infinite values, most values in the infinite set of
>digital whole numbers have infinite values.
Martin
====
Subject: Re: Orlow cardinality question
> Virgil said:
> Wrong! The finite sums are unbounded in the reals, but no infinite
> sum is even defined.
> That doesn't mean it CAN'T be defined.
Why would anyone want to?
> Number ARE the sizes of sets.
>
> Not before numbers existed. Sets are the measure of set sizes, and
> numbers are only a convenient afterthought.
>
> Sets are the measure of set sizes? So we had set sizes, wanted to
> measure them, and invented sets? Uh, sure. Whatever you say.
No! TO has it backwards, as usual. We had (finite) sets of various kinds
of objects, such as herds of sheep and bags of pebbles, and wanted to
compare them, which was done by pairing off members of the two sets, to
see if they came out even or which one came out with some left over.
>
> Numbers were not set sizes before numbers existed? Well, no, I guess they
> weren't anything before they existed. When was that again?
Before TO's time.
>
>
> Sets and elements are defined in terms of each other, which is
> sufficient.
>
> But which precedes numbers.
>
> The concept of set is equivalent with the inclusion of some number of
> elements. There is nothing else to talk about in a pure abstract set.
>
> Until one has sets to measure, there are no numbers by which to measure
> them, and the numbers are only names for standardized sets.
> We have always had sets to measure, even before we called them sets.
> Sometimes we called them cows, sometimes apples, sometimes people or
> coins. Sets are an abstraction of these things that we have always had
> around.
Right! Before sets were called sets and there was any word for numbers,
people wanted to keep track of their things, like how many cows in their
herds. Though I strongly suspect that numbers were around well before
coins were invented.
The number of animals in a herd of cows could be tallied as notches in
a stick, for example, without any numbers being used at all.
====
Subject: Re: Orlow cardinality question
> Jan de Vos said:
> So if you call the size of N (the set of naturals) N, then N should
> also be the maximal element of N. However, by definition, N+1 is
> also in N, and since N+1>N, N is no longer the maximum element of
> N.
> Uh huh. That's a problem isn't it? Well if you declare the size to be
> N then you can always add another element, and get a set of size N+1
> can't you? The same problem exists for both maximal element AND size
> for the finite naturals, which Cantor has resolved by falsely
> caliming the finite naturals constitute an infinite set.
For any initial set of naturals, i.e., a set of all naturals less than or
equal to some given natural, the set size is the size indicated by the
last member of that set, so that if the size of N is indicated by any
member of N, it must be the last member, call it Last(N).
But then we can define a bijection, f, from N to N \ {Last(N)} by
f(Last(N)) = 1, and for x < Last(N), f(x) = x+1.
So we might as well take N{Last(N)} as our set of naturals to start
with.
>
> Let us call the size of N 's', for a while, to avoid clutter.
> Again, in short:
>
> |N| = s => s is the maximum element of N => s in N => s+1 in N
> => s is not the maximum element of N (since s+1 > s).
>
> This is a contradiction, plain and simple. Surely even you won't
> deny this?
> I never claimed there was a maximal element to the set of finite
> naturals as a whole, just as I never claimed it has any specific
> size. However, if the maximal element or upper bound is finite
> What if there is no maximal element to the set of finite naturals?
> then the same is true of the set size, as I have shown in three different
> ways,
Does that mean that sets without maximal elements can't have set sizes?
> and if the set size is infinite, then the maximal element or
> upper bound is also infinite.
Only if there is one. TO has not shown that there need be one.
Maximal elements and upper bounds are not the same thing. A maximal
element to an ordered set, if it exists, must be a member of the set
itself, but an upper bound need not be a member of the set itself, and
often is not. There can only be one maximal element to a set, but there
can be infinitely many upper bounds, if none of them are members of the
set itself.
>
> Note that I only used your theorem and the definition of the
> natural numbers. That means that your theorem /has to be false/.
> Which, whooptydoo, means that your 'induction' really can't prove
> anything for 'infinite numbers' or infinite sets.
>
> Uh, no, as usual you applied YOUR theorem that every finite set MUST
> have a maximal element
Only if the set is a totally ordered set, but for such sets, if there is
no injection from the set into any proper subset then it MUST have a
maximal, and minimal member.
Suppose that we have a nonempty set S, with no maximal member, we can
pick from it a sequence of ever larger members without ever getting to a
maximum member, since there isn't one. But the mapping that takes each
choice to the next choice but leaves everything else fixed maps the
original set injectively into a proper subset, so this set without a
maximal member is infinite.
Since sets without maximal members must be infinite, by contraposition,
a nonempty finite ordered set must have a maximum.
====
Subject: Re: Orlow cardinality question
<87k6ko3i5t.fsf@phiwumbda.org>
> understand it, that bijection is created by forming a linear order via
> diagonal traversal of the grid of rationals. Can you provide a formulaic
> bijection?
Can you provide a definition of a formulaic bijection? Can you
provide the title of a book on set theory that discusses the difference
between formulaic bijections and nonformulaic bijections?
Once again: are you claiming that what you say is part of a body of
shared knowledge outside usenet.sci.math, in which case where is this
body of knowledge shared, or are you claiming that you made it all up
yourself, in which case if you claim it is correct mathematics, why do
you imagine you have still not won a Fields medal?
Brian Chandler
http://imaginatorium.org
====
Subject: Re: Orlow cardinality question
On 23 Jun 2005 08:52:59 -0700, Randy Poe wrote:
>> The one useful thing that I got out of this discussion is that I went
>> over the ZF(C) axioms and the Peano construction etc... So I find it
>> interesting to explicitly state and think about the distinctions.
>> As stated, it still seems (or it would seem to a naive set theorist)
>> that since you can construct all the finite sets {1,...,n} then you
>> should be somehow able to union them together and get N. However my
>> understanding of the ZF axioms is then that well, we can't quite union
>> them since we can only union over a set, and unless we have the set N,
>> then we can't union all those finite sets (which are indexed by
>> elements of N) to get N, and if we already had N, why would we need to
>> union things together.
>I'm not sure what you're saying. We certainly can talk
>about countable unions of sets, and even uncountable
>unions of sets. The existence of those things does
>not require that they be constructed in finite time.
If I understand him right, he is saying that in ZF minus the axiom of
infinity, you can't construct any infinite sets. Power set won't get
you there, because the power set of a finite set is still finite.
Sum won't get you there because the union of a finite collection of
finite sets is still finite. Etc.
>I'm not really up on ZF/ZFC set theory, but I think
>that I'm relying on the Axiom of Choice. That is, when
>I say that an argument of the form
> 1. Let x be an arbitrary element of set S.
> 2. Property A is true of x.
> 3. Therefore A is true for all x in S.
>is valid, I have relied on the Axiom of Choice for
>this to be valid for a set which is not enumerable.
That would be universal instantiation. No choice necessary.
>The existence of countable union S = U{S_k, k in N}
>just says I have a set {x : x in S_k for some k}.
>If I have a way of establishing for an arbitrary x
>that x is in S_k for some k, or x is not in any S_k,
>then I have a well defined countable union. And in
>the same way I can create well defined uncountable
>unions.
>Again, though I am not that familiar with the founding
>axioms of set theory, I think it is the Axiom of Choice
>I am relying on here.
No choice required here either.
Choice is more along the lines of,
For every x in R, let A_x be a nonempty subset of R.
Then there exists a function f: R -> R such that for all x in R, f(x)
is in A_x.
The existence of that function is an instance of the axiom of choice.
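Since the thread keeps returning to what Choice does and does not assert, here is a finite analogue of Martin's example in Python. The index set, the sets A_x, and the minimum rule are my illustrative inventions; with finitely many sets no axiom is needed, this only shows the shape of the object the axiom asserts exists in general:

```python
# For each index x, A[x] is a nonempty set: a finite stand-in for the
# family {A_x : x in R} in Martin's example.
A = {x: {x, x + 1, 2 * x + 1} for x in range(5)}

# A choice function f picks one member of each A_x. With finitely many
# sets an explicit rule (here: take the minimum) always exists; the
# Axiom of Choice is only needed when no uniform rule can be written.
f = {x: min(s) for x, s in A.items()}

assert all(f[x] in A[x] for x in A)  # f(x) is in A_x, as required
```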
Martin
====
Subject: Re: Orlow cardinality question
<85k6ko518n.fsf@lola.goethe.zz>
Since there is a set of finite naturals (see N* above) with all the
> properties of a set of naturals that mathematicians need or want, TO's
> extras are, at best, irrelevant.
> At worst, they are irrelevant to your interests. I asked what they break.
> I guess the answer is nothing.
Indeed.
I have already responded to this...
(imaginator...@despammed.com Jun 23, 12:44 pm)
> Eat me, Jiri. There is nothing in standard math that will suffer
> from infinite numbers, except your own delusions.
An authoritative-sounding statement. Well, actually, group theory (and
the rest of algebra) would be in fairly significant trouble if your
version of things were correct. But despite your pompous tone, by your
own admission you haven't a clue what group theory is, which makes it
strange you can be so confident.
I mean, do the Tints form a cyclic group under addition? And do the
finite Tints form a, er, what, fuzzy subgroup, perhaps???
(Or an indeterminate subgroup? Why can't I find any reference to
these problems in algebra textbooks? Why haven't you won a Fields medal
yet?)
Brian Chandler
http://imaginatorium.org
====
Subject: Re: Orlow cardinality question
<85mzpqta8t.fsf@lola.goethe.zz>
<85oea13pc1.fsf@lola.goethe.zz>
> The set of natural numbers is not of the type "has a maximal element."
> > Yeah yeah I know, your favorite mantra: largest finite. largest
> > finite.... Ommmmm......
> > For each finite n in N, let n* denote the set of all m in N with
> > m <= n, e.g., 3* = {1,2,3}
> > Let the union of all these n* be denoted N*
> > CLAIM: N* is a set of finite naturals with no maximal member.
> > PROOF: Suppose that N* contains a maximal element, say m.
> Since N* is a union of n*'s, we must have m in n* for some n*.
> Then m is maximal in n* too, and we must have m = n.
> But n is finite so n+1 is also finite.
> Then n+1 is in N*, and m < n+1.
> Thus if m IS maximal, it is NOT maximal.
> This contradiction arises by assuming N* has a maximal member,
> Therefore N* cannot have any maximal member, QED.
> > So now we have a set of finite naturals with no maximal member that we
> can (and do) use as OUR set of naturals regardless of what sort of set
> of naturals TO wants to use.
> Sure, that's fine. It all makes sense. Just don't claim it's an
> infinite set.
> The contradiction is in calling this an infinite set, when you need
> infinite whole numbers to have an infinite set of distinct whole
> numbers. This is a fine example of an indeterminate set.
In normal mathematics, I don't suppose an "indeterminate set" is
defined, but if the expression were used I would expect it to mean an
ill-defined set, one where membership of the set is not clearly
defined. For example "the set of all uninteresting natural numbers"
(which leads to a silly 'proof' that the set is empty). You say this is
"a fine example of an indeterminate set", but you would surely not
suggest it is indeterminate in the same sense as the set of
uninteresting numbers? Virgil's set is quite clearly defined.
Plainly (to everyone except you) in normal mathematical terminology the
set N* is an infinite set. Equally plainly, using the words "finite"
and "infinite" with you is always going to be a waste of time, since
you have too many misconceptions about them. So I will use different
words. (Oh dear, I see that below I'm still using 'natural number' to
mean the normal mathematical ones not the Orlovian ones; sorry, you'll
have to lump it.)
Definition: a ditty is a sequence of words taken from the set {nought,
one, two, three, four, five, six, seven, eight, nine, ping}, with the
constraint that 'nought' never appears at the beginning or following
'ping'.
Definition: a numname is a ditty not including 'ping'.
(Slightly informally) We generate the numname for a natural number by
writing the number in normal decimal notation (no leading zeros), then
reading the names of the digits from the above set in sequence from
left to right. E.g. the numname of 2^10, 1024, is
onenoughttwofour.
Definition: a ditty is said to be singable if (given sufficient time
or speed) it can be performed  that is, said aloud  and greeted with
applause at the end. No applause, and it is not singable; no end, and
there can be no applause.
Definition: a ditty that is not singable is said to be miffy.
I hope you agree that every pofnat has a numname that is a singable
ditty?
Now we associate a ditty with any set of natural numbers, by arranging
the numbers in ascending order thus:
{ n0, n1, n2,... (np)} (there may be a last one np or not; that's
undefined at this point)
We create the ditty by concatenating the numnames for n0, n1, etc,
inserting 'ping' for every comma. I'll call this a set ditty. I think
it's fairly obvious that there is a 1-1 mapping between sets of pofnats
and set ditties, OK?
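Chandler's numname and set-ditty constructions are concrete enough to sketch in Python. The digit names follow the post; the plain-concatenation joining convention is my reading of it:

```python
DIGIT_NAMES = {"0": "nought", "1": "one", "2": "two", "3": "three",
               "4": "four", "5": "five", "6": "six", "7": "seven",
               "8": "eight", "9": "nine"}

def numname(n):
    """Concatenate the digit names of n, written without leading zeros."""
    return "".join(DIGIT_NAMES[d] for d in str(n))

def set_ditty(nums):
    """The ditty of a set: numnames in ascending order, 'ping' for commas."""
    return "ping".join(numname(n) for n in sorted(nums))

assert numname(1024) == "onenoughttwofour"   # the post's example for 2^10
assert set_ditty({3, 1024}) == "threepingonenoughttwofour"
```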
Now we will call a *set* singable if its set ditty is singable, and
call the *set* miffy if its set ditty is miffy. So please answer the
following question:
Is Virgil's set N* singable or miffy? Note that it must be one or the
other, because miffy just means not singable.
Brian Chandler
http://imaginatorium.org.yes.I.know.it's.hopeless
====
Subject: Re: Orlow cardinality question
> Virgil said:
> In this case the sets are defined
> uniquely by the natural number. Sorry.
>
> Wrong! Von Neumann defined those standard sets, from which we can
> derive natural numbers, very nicely without presuming any natural
> numbers existed at all.
> Unbang your head already. I made a statement about sets of naturals
> beginning with 1, and you refer to an example beginning in 0 as a
> counterexample.
So is TO's head so banged up he can't figure out what comes after 0?
In the case of Peano, or von Neumann, or any other construction of a
set of natural numbers, the names come after the constructions. There
is no point in having names for what does not yet exist.
>
>
> I was asking because I have no idea what TO's unit infinities are
> nor what distinguishes either of TO's unit infinities from the other.
> You don't know the difference between the number of naturals and the
> number of reals, the discrete and continuous infinities, N and R?
> Hmmmm.....
Since this is the first time I have seen either of them called a "unit
infinity" and TO is notoriously sloppy with his terminology, I was not
at all sure that was what HE meant. Now that he has implied that he
meant the cardinalities of N and R, I know how they differ. I wonder
whether TO does.
>
>
>
> Induction only describes what is true for MEMBERS of the set
> of all
> naturals, it says nothing about the set itself.
> It is true that, for every n in N, the set of naturals from 1 to
> n has n as a maximal element and also as a set size.
>
> As a representative of set size.
>
> Meaning that there is a standard set, sometimes named by its last
> element, which is the standard set for finite cardinality.
>
> Which is SOOOO different from infinite cardinality.
>
> Actually not. Read von Neumann's treatment of the natural numbers.
> I was being sarcastic. (sigh)
> Anyway, got a good link to what you're referring to?
TO appears to have no more talent for sarcasm than for mathematics.
>
====
Subject: Re: Orlow cardinality question
> Virgil said:
>
>>There isn't a maximal value in N. So (nonexistent) 'it' is not the
>>size of the set.
>>Brian Chandler http://imaginatorium.org
>>Whatever size the set is, that value is a value of an element in the
>>set. You really can't deny this constant equality between the range
>>of element values and the size of the set. You are in denial.
>>Note that for standard mathematics N has no maximal member.
>>If for each finite n in N, I_n = {m e N: m <= n} represents the set
>>of all initial segments of naturals, then it is trivial that
>>N = UNION_{n e N} I_n.
>>I.e., N is the union of all its initial segments.
>>TO is arguing that what holds for each if those initial segments must
>>also hold for the union of all of them, namely that the union must
>>contain a maximal member.
>
> The argument is that the set size is the value of an element in the set.
>
>>But for any object to be in a union of sets, it must be in at least one
>>of them, so that TO's argument requires that some initial segment, which
>>is a proper subset of N, must contain a value larger than every other
>>member, or that some member is larger than its successor, and other
>>equally self-contradictory statements.
>
> The only contradiction is introduced when you declare this set of finite
> naturals to be infinite, as I have shown.
>
>>
>Other
>times people complain that I am talking about properties of a set.
>This is also an empty objection, since I am talking about the
>property of a number, such that a set uniquely defined by that number
>has that number both as its size and maximal element, or element
>range (plus 1).
>>
>>That, as a definition of number, is unsatisfactory and incomplete.
>
> That is not a definition of a number, but a property of a natural
> number, which can be proven to hold for the entire set N, through
> inductive proof.
>
>>
This can only be true if the set N is itself a natural number. Is it?
(Note: I specifically mean the *set* N, not the N you refer to as the
size of that set.)
Matt
====
Subject: Re: Orlow cardinality question
> Virgil said:
>
> Virgil said:
>
> I have no idea what you are talking about. Once you pick your
> axioms, you have to use those axioms. You cannot just use any
> axiom in your proof.
> Why not? If axioms are all universally true, then I can.
>
> Must axioms, not being about any physical world, ever be
> universally true?
>
> Given any axiom, there is nothing to prevent us from investigating
> a system in which that axiom is false.
>
> And such investigations have occasionally proved quite fruitful.
>
> For example,the parallel postulate, which was for a long time
> believed by everyone to be universally true for the real world,
> and therefore universally true.
>
> Sure. Contradiction is one way of testing the interrelationships
> between axioms, and the level of their universality. When we find an
> axiom can be false and still result in a working system, it should
> become generalized to apply universally, or qualified as to the limit
> of its applicability.
>
> Can TO find an example of any axiom which cannot be false in any working
> system? That is, an axiom that is actually used currently in some actual
> system whose negation in any system whatsoever makes that system
> selfcontradictory?
>
> Only when he can do this does TO's plan even begin to make any sense,
> and even then, not much.
>
> The addition, multiplication or exponentiation of two finite numbers is
> finite.
> Universal enough for you?
In the field of three elements, 3+3 is not even defined, much less
possible, although for any member x of that field, 3x + 3x and 3x * 3x
are definable and possible and equal to 0x.
====
Subject: Re: Orlow cardinality question
> Virgil said:
> For any countable set, for which some listing is known to exist, it is
> quite sufficient to have a listing of that set, in any order and
> regardless of whether that listing corresponds to any other sort of
> ordering on that set, to create a formula based on position in that
> listing. Therefore, such functions depend only on the existence of SOME
> listing of members, and not on any particular listing.
>
> I would like to see your formula that maps the naturals to the rationals.
> As I understand it, that bijection is created by forming a linear order
> via diagonal traversal of the grid of rationals. Can you provide a
> formulaic bijection?
I can easily inject the rationals into the naturals, which I will do
below. A bijection is a bit more difficult, and I do not have the details
handy right at the moment.
For an injection from Q into (but not onto) N:
Each q in Q is uniquely either 0, or m/n, or -m/n, where m and n are
natural numbers with no common factor other than 1.
Then F: Q -> N is definable, where ^ indicates raising to a power, by
F(0) = 1 and
F(m/n) = (2^m)(3^n) and
F(-m/n) = (5^m)(3^n)
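That injection is simple enough to run. Here is a Python sketch using exact rationals, so reduction to lowest terms is automatic (the function name and the spot-check are mine):

```python
from fractions import Fraction

def F(q):
    """The injection Q -> N from the post: F(0) = 1, F(m/n) = 2^m * 3^n,
    F(-m/n) = 5^m * 3^n, with m/n in lowest terms."""
    q = Fraction(q)                  # reduces to lowest terms automatically
    if q == 0:
        return 1
    m, n = abs(q.numerator), q.denominator
    return (2**m) * (3**n) if q > 0 else (5**m) * (3**n)

# Spot-check injectivity on a grid of rationals. Unique factorization
# guarantees it in general: positive values map to multiples of 2,
# negative values to multiples of 5, and zero to 1.
sample = {Fraction(p, r) for p in range(-6, 7) for r in range(1, 7)}
images = [F(q) for q in sample]
assert len(images) == len(set(images))   # no two rationals share an image
```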
====
Subject: Re: Orlow cardinality question
> Virgil said:
> For each finite natural, n in N, let n* represent the set of naturals up
> to and including that value, so that, for example, 3* = {1,2,3}, and for
> all finite n in N, Card(n*) = n.
>
> Now let N* be the union of these n*'s for all finite n in N.
>
> Then N* is an infinite set of finite naturals, such as TO claims
> cannot exist.
>
> I simply stated, and proved, that such a set is finite.
And I simply stated and proved it to be not finite:
Successor: N* -> N* injects N* into a proper subset of itself.
Ergo, N* is not finite. QED.
So TO's proof is garbage.
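The criterion Virgil applies throughout the thread, that a set is infinite when it injects into a proper subset of itself, can be seen in miniature in Python: successor escapes every finite initial segment, while on the naturals it is injective and misses only the first element (the variable names are mine):

```python
def successor(x):
    """The injection x -> x+1 used in the proof above."""
    return x + 1

# A finite initial segment {1,...,n} is NOT closed under successor:
# the image contains n+1, which lies outside the segment.
n = 10
segment = set(range(1, n + 1))
image = {successor(x) for x in segment}
assert not image <= segment      # n+1 escapes every finite segment

# On (a long stretch of) N, successor is injective and never hits 1,
# which is exactly an injection into a proper subset.
stretch = range(1, 10**5)
image = {successor(x) for x in stretch}
assert 1 not in image
assert len(image) == len(stretch)    # injective: no two inputs collide
```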
====
Subject: Re: Orlow cardinality question
> Virgil said:
> Since there is a set of finite naturals (see N* above) with all the
> properties of a set of naturals that mathematicians need or want, TO's
> extras are, at best, irrelevant.
> At worst, they are irrelevant to your interests. I asked what they break.
I
> guess the answer is nothing.
TO's infinite naturals break the rule that the set of naturals is the
SMALLEST set containing its first element and containing the successor
of each of its elements. TO's infinite naturals are extras, and extras
are not allowed.
====
Subject: Re: Orlow cardinality question
> But as it is, he asked a simple question: What's wrong
> with infinite integers? and you gave a clear and concise answer: By
> induction, it is trivial to prove that every natural number is finite.
>
>
> I have pointed out the flaw in that inductive proof several times
As we have pointed out flaws in your attempts to point out a flaw.
====
Subject: Re: Orlow cardinality question
> Virgil said:
>
> largest finite. Ommmmmm largest finite..... (shakes rattle and
> drinks snake blood)
>
> Whatever you can say about my maximal element, I can say about your
> set size.
> So, you might want to watch your tongue.
>
> Remember, I don't have a problem with N+1 anyway.
> 
>
> Define N* as the union of sets of form {1,2,3,...,n} for finite n.
> then one can easily prove that N*, all of whose members are finite
> naturals, has no largest member.
>
> I believe you. This is not wrong. There is no largest finite.
If there is no largest finite natural in N*, then the mapping f(x) = x+1
injects N* into a proper subset of itself, which means that N* is an
infinite set.
====
Subject: Re: Orlow cardinality question
> Virgil said:
>
> The set of natural numbers is not of the type "has a maximal element."
>
>
> Yeah yeah I know, your favorite mantra: largest finite. largest
> finite....
> Ommmmm......
>
>
> For each finite n in N, let n* denote the set of all m in N with
> m <= n, e.g., 3* = {1,2,3}
>
> Let the union of all these n* be denoted N*
>
> CLAIM: N* is a set of finite naturals with no maximal member.
>
> PROOF: Suppose that N* contains a maximal element, say m.
> Since N* is a union of n*'s, we must have m in n* for some n*.
> Then m is maximal in n* too, and we must have m = n.
> But n is finite so n+1 is also finite.
> Then n+1 is in N*, and m < n+1.
> Thus if m IS maximal, it is NOT maximal.
> This contradiction arises by assuming N* has a maximal member,
> Therefore N* cannot have any maximal member, QED.
>
> So now we have a set of finite naturals with no maximal member that we
> can (and do) use as OUR set of naturals regardless of what sort of set
> of naturals TO wants to use.
>
> Sure, that's fine. It all makes sense. Just don't claim it's an
> infinite set.
Does or does not the mapping f(x) = x+1 map our set of finite naturals
into a proper subset of itself?
If it does, then ourN is an infinite set.
If TO wishes to claim it does not, then TO is obligated to say for which
x in ourN x+1 fails to be in ourN. Since TO cannot do that, he is wrong
again!
> The contradiction is in calling this an infinite set, when you need
> infinite whole numbers to have an infinite set of distinct whole
> numbers. This is a fine example of an indeterminate set.
There is nothing indeterminate about the bijection from ourN to
ourN-less-its-first-member defined by x -> x+1.
====
Subject: Re: Orlow cardinality question
> stephen@nomail.com said:
> First, we need to define finite and infinite.
>
> A set X is infinite if there exists a bijection from X to a proper
> subset of X.
>
> A set is finite if it is not infinite.
Alternately, a set is finite if for any ordering of its members every
nonempty subset, including the set itself, has a greatest or last
member (this also implies existence of a first member since the reverse
ordering has a last member).
Then a set is infinite if it is not finite, i.e., it can be ordered so
that some nonempty subset does not have a last member (and therefore
the set itself does not have a last member).
The above definition of finiteness can be used to show that a set is
finite if and only if it has no injection into any proper subset, so
that it is a mere matter of convenience which definitions one chooses to
use.
>
> We now need to describe the natural numbers in terms of sets.
> First, let S(x) = { x + {x}}, where + is the union operator.
That should be S(x) = x + {x}, so that S(x) will contain all the
members of x and one new object, x itself.
> We define N, the set of natural numbers, recursively:
> 1. {} is in N
> 2. if n is in N, then S(n) is in N
> 3. nothing else is in N
Instead of condition 3, one might equally well say that N is the
intersection of all sets satisfying conditions 1 and 2.
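Stephen's recursive construction can be sketched with nested frozensets (the function names are my own). Each natural n comes out as the set of all smaller naturals, so its cardinality is the number itself:

```python
def succ(x):
    """S(x) = x union {x}: all members of x, plus x itself as a new member."""
    return x | frozenset([x])

def von_neumann(n):
    """The set coding n: 0 = {}, 1 = {{}}, 2 = {{}, {{}}}, and so on."""
    x = frozenset()          # {} plays the role of zero
    for _ in range(n):
        x = succ(x)
    return x

three = von_neumann(3)
assert len(three) == 3               # Card(n*) = n, as in the thread
assert von_neumann(2) in three       # every smaller natural is a member
assert von_neumann(2) < three        # ... and also a proper subset
```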
>
> We can now prove that each n in N is finite using induction.
>
> Base Case. {} is finite. {} has no proper subsets, so there
> cannot exist a bijection from {} to one of its proper subsets.
>
> Inductive Step. If n is finite, then S(n) is finite. Assume that
> S(n) is infinite. This means there exists a bijection f from S(n)
> to some subset V of S(n). We can assume that V does not contain
> {n}. Given f we can construct a bijection from n to a proper
> subset of n. Simply remove f({n}) from f, and we now have a
> bijective function from n to V \ {f({n})}. This means n is infinite,
> which contradicts the inductive hypothesis.
>
> Therefore, each n in N is finite.
>
> N itself is infinite because S(x) is a bijection from N to a proper
> subset of itself, namely N \ {{}}.
>
> So which part of this proof do you not accept?
>
> Stephen
>
> Base Case. {} is finite. {} has no proper subsets, so there
> cannot exist a bijection from {} to one of its proper subsets.
>
> Inductive Step. If n is finite, then S(n) is finite.
====
> Within the world of mathematics, one can compare results from
> different areas for consistency. That's the immediate environment of
> cardinality. Mathematics exists within a larger world of science and
> logic, and should agree with those larger areas, which in turn should
> agree with observed phenomena in the world. It's a matter of levels
> of abstraction from concrete reality.
The world of imagination, in which numbers exist if they are to exist
at all, is not constrained by the need to correspond to any physical
reality at any level of abstraction.
>
>
> Maybe I am too philosophical for this group, but math and science
> have grown out of philosophy in the past, so maybe I'm just what you
> need.
Not hardly!
>
> But you want to limit everyone else by what you can't fathom.
> What is it I can't fathom?
The way that mathematics works, and has to work in order to be workable.
> I have never seen anyone use set theory to draw a conclusion about
> numbers and strings and trees that violate their properties,
> other than you and some other cranks. You are the one who insists
> that for some finite n, n+1 is infinite.
> No, I never claimed that
Consider the definitions of finite and infinite above. TO claims that
the ordered set of finite naturals is finite but has no largest
element. These two claims are mutually exclusive. So that by claiming
that the set of finite naturals has no largest member, he is admitting
that it is not a finite set even while he is claiming that it is a
finite set.
>
> I would agree that a set IS the elements in the set. But what does
> that have to do with anything? We can still look at properties of
> the set that do not depend on all the properties of the elements of
> the set. It is known as abstraction, and it is a useful skill in
> mathematics and in the real world.
>
> But, there is nothing you can say about the size of an infinite set
> without looking at the properties of the elements. A set is simply a
> collection of things, so with that simple abstract definition, there
> is really no other measure of the set than that number. What other
> properties of a finite set can you discuss without reference to
> properties of the elements? How can you even discuss relative
> infinite set sizes without reference to those properties?
>
> Incorrect. Convergent series have a definite finite sum.
Only in the sense that there is a limit to the sequence of finite
partial sums, but this limit is not an element of that sequence.
>
> There are no infinite natural numbers. Every natural number is
> finite, and every natural number has a finite successor.
> mantra...mantra....
If so, it is a mantra that leads to truth, unlike TO's mantras.
>
> Part of your problem is that you do not even have a working
> definition of finite or infinite. Elsewhere you said that a
> number is finite if it is smaller than every infinite value. What
> then is your definition of infinite?
> Without end or bound. What's your general definition of infinite?
Only defined for sets, and not otherwise, and for a suitable definition,
see above.
> Does {{}} really contain any natural numbers?
It does by definition!
====
Subject: Re: Orlow cardinality question
> stephen@nomail.com said:
> stephen@nomail.com said:
>>
>> Hmmm, I must be bored this morning, I'm replying to Tony again.
>>
>> There seems to be a lot of that going around. :)
>>
>> Stephen
>>
> Yeah very interesting.....Do cranks usually get this much attention?
>
> Yep. We have been through all this before. You should
>
> I doubt very many people are really taking you seriously,
> given how absurd your statements are (e.g. 2/2 != 1).
>
> Stephen
>
> I never said the value was different, quantitatively, but there is
> certainly a difference between a whole uncut pie, and two halves of a
> pie, so they can represent slightly different ideas, similarly to
> 0.111111.... and 1.00000....
>
> Just because what I say isn't what you're used to doesn't make it
> wrong. It just makes it strange. C'est la vie!
There is a difference between the symbols 2/4 and 3/6, too, and between
the different basal representations of all but finitely many naturals,
but when one is talking values, as equals or not-equals signs would
indicate, then the representations are not relevant.