[sdpd] Re: UPPW-5 solution - UPPW-6 problem
hello to all
First I would like to praise Armel for bringing indexing into the
spotlight. Now I would like to comment on Armel's statement "Have the
most recent indexing software outmatched the old established ones ?
Perhaps, hard to say".
I don't think these examples are going to demonstrate whether new
algorithms have made progress; they do, however, show when they fail. I
do enjoy the challenge, so don't stop, Armel. The reason for being
pessimistic is that if a new method does find a correct solution to a
difficult unknown, who is to know whether the correct solution was
indeed found? As much as I think that "real" data is necessary for
testing methods, there is no substitute for simulated test data where
the solutions are known. There is also no substitute for understanding
the methods rather than trusting their implementations.
UPPW-5 is a case where the powder data do not yield a unique solution.
When multiple solutions yield similarly "perfect" Pawley/Le Bail fits
with similar de Wolff values, it is not a matter of failure of the
programs/methods but rather a failure of the data to yield a unique
solution. In my view it is therefore not possible for any indexing
method to resolve the ambiguity. This is not to say, however, that the
door should be closed to new methods. The way forward is to go
backwards. Backtracking could mean recollecting the data on a
higher-resolution instrument (e.g. Peter Stephens's), annealing the
sample or trying some SEM/TEM analysis. If all this fails then it is
really a matter of attempting structure solution for each of the
possible sets of lattice parameters.
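For anyone wanting to see why similar de Wolff values cannot
discriminate: M20 = Q20 / (2 <|dQ|> N20), where Q = 1/d^2, Q20 is the Q
value of the 20th observed line, <|dQ|> is the average absolute gap
between each observed Q and the nearest calculated Q, and N20 is the
number of calculated lines out to Q20. A toy sketch in Python (my own
simplification, not code from any of the programs mentioned):

def m20(q_obs, q_calc):
    # de Wolff figure of merit M20.
    # q_obs: Q = 1/d^2 for the first 20 observed lines;
    # q_calc: Q for all lines calculated from the trial cell.
    q_obs = sorted(q_obs)[:20]
    q20 = q_obs[-1]
    # average absolute gap between each observed and nearest calculated Q
    eps = sum(min(abs(q - qc) for qc in q_calc) for q in q_obs) / len(q_obs)
    # number of calculated lines out to Q20
    n20 = sum(1 for qc in q_calc if qc <= q20)
    return q20 / (2.0 * eps * n20)

Two trial cells that index the same lines with comparable gaps and
comparable line densities come out with comparable M20, which is exactly
the ambiguity I am describing.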
Excuse the long mail, but while I am at it I would like to correct a
misconception regarding the idea of an "exhaustive" search, and in the
process state the reason why I developed an indexing algorithm. I have
heard the term "exhaustive" used so much in indexing that I am beginning
to believe I have got something wrong, so please enlighten me if you can.
On data with small 2Th errors a method can claim to be exhaustive.
However, on data with large errors, due to say peak overlap or a dominant
zone, the term "exhaustive" loses meaning. The successive dichotomy
method, a stroke of genius on Daniel Louër's part, is often regarded as
being exhaustive (the sketch below shows the basic idea of halving
cell-parameter intervals). For data with large errors, the delta-2Th
values would need to be set large for the dichotomy method to proceed to
the correct solution. If the delta-2Th were indeed set large enough,
then many solution ranges would be returned (note I am defining a
solution range as a solution +- delta-2Th). Sure enough, the correct
solution range would be among them, but it would be impossible to
identify.
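Here is a bare-bones illustration of the dichotomy idea, restricted to
the cubic case (my own toy, not DICVOL itself; the wavelength and the
hkl range are assumptions). A cell-edge domain survives if every
observed line, within +- delta, overlaps the range swept out by some hkl
over that domain, and surviving domains are halved until they converge:

import math

WAVELENGTH = 1.5406  # Cu K-alpha; an assumed value for this toy
# distinct h^2 + k^2 + l^2 values for a small hkl range
HKL2 = sorted({h*h + k*k + l*l
               for h in range(5) for k in range(5) for l in range(5)} - {0})

def q_from_2th(two_theta_deg):
    # Q = 1/d^2 via Bragg's law, d = lambda / (2 sin(theta))
    return (2.0 * math.sin(math.radians(two_theta_deg / 2.0)) / WAVELENGTH) ** 2

def survives(a_lo, a_hi, q_obs, dq):
    # For a cubic cell Q(hkl) = m / a^2 with m = h^2+k^2+l^2, so over the
    # domain [a_lo, a_hi] each hkl sweeps out [m/a_hi^2, m/a_lo^2]; the
    # domain survives if every observed Q (+- dq) overlaps one such range.
    return all(any(m / a_hi**2 <= q + dq and m / a_lo**2 >= q - dq
                   for m in HKL2)
               for q in q_obs)

def dichotomy(a_lo, a_hi, q_obs, dq, tol=1e-3):
    # Successively halve surviving cell-edge domains until narrower than tol.
    if not survives(a_lo, a_hi, q_obs, dq):
        return []
    if a_hi - a_lo < tol:
        return [(a_lo, a_hi)]
    mid = 0.5 * (a_lo + a_hi)
    return (dichotomy(a_lo, mid, q_obs, dq, tol) +
            dichotomy(mid, a_hi, q_obs, dq, tol))

Run it with a generous dq and the list of surviving domains grows
quickly; that multiplication of solution ranges is precisely the point.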
If the correct range could somehow be identified, then in my view an
iterative least-squares fit between the observed and calculated
d-spacings is the best choice of solution within a particular range; a
sketch of that step follows below. Note that multiple Pawley/Le Bail
fits would not be feasible if the delta-2Th were large; this brings me
to my own algorithm (dare I say it's TOPAS), which returns iterative
least-squares solutions. Now, having said that no method is going to
resolve ambiguity, it is my opinion that ITO and DICVOL combined
probably solve more than 90% of everything thrown at them. Thus new
algorithms are only filling a small gap, and finding this gap is
presumably what UPPW is all about - or is it? If not then it's a lot of
fun in any case.
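To make "iterative least squares" concrete, a toy version for the cubic
case (again my own simplification, and not the TOPAS algorithm):
alternate between assigning each observed line to its nearest calculated
line and refining the cell edge by least squares.

import math

def refine_cubic_a(q_obs, m_assigned):
    # For cubic symmetry Q = m / a^2 is linear in x = 1/a^2, so the x
    # minimising sum((Q_i - m_i * x)^2) has the closed form below; lower
    # symmetries would need a nonlinear fit (e.g. Gauss-Newton) instead.
    x = (sum(m * q for m, q in zip(m_assigned, q_obs)) /
         sum(m * m for m in m_assigned))
    return 1.0 / math.sqrt(x)

def iterative_ls(q_obs, a_start, hkl2, n_iter=10):
    # Alternate line assignment and cell refinement until a settles;
    # hkl2 is a list of h^2+k^2+l^2 values (e.g. HKL2 from the sketch above).
    a = a_start
    for _ in range(n_iter):
        m_assigned = [min(hkl2, key=lambda m: abs(q - m / a**2)) for q in q_obs]
        a = refine_cubic_a(q_obs, m_assigned)
    return a

Given a solution range from the dichotomy sketch, seeding a_start with
its midpoint and letting this settle is, in my view, the sensible way to
pick the best cell within that range.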
cheers
alan