Re: [sdpd] Seen at Geneva - Call for reviews
> There is a third possibility, which is refining against a pseudo
> powder pattern rebuilt from the peak positions and intensities, just
> like it is done in the pseudo-direct-space program ESPOIR for structure
> solution. The advantage here would be the possibility of enlarging
> the peak widths in order to find the global minimum more easily.
> ESPOIR could easily be modified (by me) for indexing in that way,
> because it already applies that pseudo-powder-pattern concept together
> with Monte Carlo and pseudo simulated annealing.
This is an interesting and ingenious suggestion, but may have an inherent
problem.
It is certainly the case that one of the major difficulties of powder
indexing, and one that is especially challenging for any global
optimisation approach, is the very small convergence radius that is
often exhibited for trial indexing solutions in solution space.
In that sense, any method for smoothing and broadening the merit surface
in solution space looks attractive.
Unfortunately, the method proposed appears to be equivalent (or very
similar) to degrading the resolution of the observed pattern, which is
well known to make patterns unindexable by merging well-resolved lines
and clusters of lines into generalised humps and ridges, and so moving
their profile maxima to different and incorrect 2Thetas.
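For illustration, here is a small numerical sketch (Python, with invented
line positions and intensities) of that effect: two resolved lines merging
under heavy broadening into a single hump whose maximum sits at a 2Theta
belonging to neither line.

    import numpy as np

    two_theta = np.arange(19.0, 22.0, 0.001)

    def gaussian(x, centre, fwhm, intensity):
        sigma = fwhm / 2.3548                    # convert FWHM to standard deviation
        return intensity * np.exp(-0.5 * ((x - centre) / sigma) ** 2)

    # Two neighbouring lines of unequal intensity (illustrative values only).
    lines = [(20.30, 1.0), (20.55, 0.6)]

    for fwhm in (0.05, 0.60):                    # sharp profile vs heavily broadened
        profile = sum(gaussian(two_theta, c, fwhm, i) for c, i in lines)
        peak_at = two_theta[np.argmax(profile)]
        print(f"FWHM {fwhm:.2f} deg: profile maximum at {peak_at:.3f} deg 2Theta")

    # Sharp case: the maximum stays at 20.300.  Broadened case: the two lines
    # coalesce and the single maximum moves to about 20.38, a position that
    # corresponds to no real reflection.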
The inference is that such pseudo-broadening would not simply smooth the
topography and increase the convergence radius, but also make qualitative
changes so that the resulting solution-space maxima are not simply
broader but moved to quite different locations.
Thus, if these suspicions are correct, merely extracting peaks from a
broadened pattern would not be sufficient, since they would not lead to
the same merit-surface maxima.
This does not mean that the proposed method is without merit, but it
suggests a possible need for further elaboration. Perhaps, by refining
iteratively while applying progressive de-broadening, the solutions
could be tracked back to their correct locations.
Whether this would work in practice would be a matter for experiment,
which I would encourage you to try.
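In outline (a bare skeleton only - local_refine below is merely a stand-in
for whatever lattice-parameter refinement the indexing program actually
performs), such a de-broadening schedule might look like:

    def local_refine(cell, fwhm):
        # Placeholder: a real program would adjust `cell` to optimise its figure
        # of merit against a pseudo-pattern built with peak width `fwhm`.
        return cell

    cell = {"a": 10.0, "b": 12.0, "c": 7.0}      # some trial cell (arbitrary numbers)
    for fwhm in (0.5, 0.3, 0.2, 0.1, 0.05):      # progressively narrower pseudo-peaks
        cell = local_refine(cell, fwhm)          # each stage starts from the last result
    print(cell)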
> Moreover, applying the "brute force" approach (trying all possibilities
> on a grid) may not need prohibitive time if enlarging the peak widths
> allows the grid step to be increased (say to 0.03 or 0.02 Angstrom
> instead of 0.01 or 0.005).
In discussing the increases in computer speed needed for exhaustive
searches, I had in mind the calculations I made back in the 1970s
concerning the time that a CDC 7600 would need for such searches. This
was about the fastest scalar processor of its day - roughly equivalent
to a 500MHz to 1GHz PC, and no slouch even now.
My estimate of a speed-up factor of approx x10000 needed before
exhaustive searches would become practicable for general symmetry was
based on my 1975 estimate of c.4 years for a CDC 7600 to complete a
binary (dichotomy) search in the triclinic case.
Binary search is theoretically the optimal exhaustive search method in
parameter space, but requires hard (definite) criteria for excluding
successive partitions of the search domain, such as the +- 2Theta bounds
on each observed line used by DICVOL and LZON.
Unfortunately, as far as I know, no equivalent hard criteria are
available for merit-surface methods, which is one reason why MMAP uses
the much less efficient method of step-search.
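For concreteness, here is a toy sketch (cubic case only, with invented
d-spacings and tolerance; it is not the actual DICVOL or LZON algorithm) of
why such hard bounds permit an efficient dichotomy search: a whole interval
of the cell edge can be rejected as soon as one observed line cannot be
indexed anywhere inside it.

    import math

    D_OBS = [4.123, 3.366, 2.915, 2.486, 2.061]  # invented observed d-spacings (Angstrom)
    TOL = 0.004                                   # hard +- tolerance on each observed d
    NS = sorted({h*h + k*k + l*l                  # allowed quotients N = h^2 + k^2 + l^2
                 for h in range(7) for k in range(7) for l in range(7)} - {0})

    def interval_feasible(a_lo, a_hi):
        """True if every observed line could still be indexed by some a in [a_lo, a_hi]."""
        for d in D_OBS:
            ok = any(a_lo / math.sqrt(n) <= d + TOL and a_hi / math.sqrt(n) >= d - TOL
                     for n in NS)
            if not ok:
                return False                      # hard rejection of the whole interval
        return True

    def dichotomy(a_lo, a_hi, resolution=0.001, found=None):
        """Recursively bisect [a_lo, a_hi], keeping only intervals that stay feasible."""
        found = [] if found is None else found
        if not interval_feasible(a_lo, a_hi):
            return found                          # whole partition discarded at once
        if a_hi - a_lo <= resolution:
            found.append(round(0.5 * (a_lo + a_hi), 4))   # surviving candidate cell edge
            return found
        mid = 0.5 * (a_lo + a_hi)
        dichotomy(a_lo, mid, resolution, found)
        dichotomy(mid, a_hi, resolution, found)
        return found

    print(dichotomy(2.0, 20.0))                   # candidate values of the cubic cell edge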
My corresponding estimate for a 7600 to make an exhaustive step-search
of indexing solution space was not 4 years but 4000 years!
Robin Shirley
------------------------------------------------
To: sdpd...@yahoogroups.com
From: Armel Le Bail <alb...@cristal.org>
Date: Tue, 03 Sep 2002 09:31:42 +0200
Subject: Re: [sdpd] Seen at Geneva - Call for reviews
Reply-to: sdpd...@yahoogroups.com
>3) Bruker presented 2 programs which took a quite different approach by
>using Monte Carlo trials followed by rapid refinement and assessment.
>One of them actually made its refinements directly against the observed
>profile rather than peak positions - this is a bold move and it will be
>interesting to see whether it succeeds.
I think that neither refinement directly against the observed profile nor
refinement against the peak positions is the best approach.
There is a third possibility, which is refining against a pseudo powder
pattern rebuilt from the peak positions and intensities, just like it is
done in the pseudo-direct-space program ESPOIR for structure
solution. The advantage here would be the possibility of enlarging
the peak widths in order to find the global minimum more easily.
ESPOIR could easily be modified (by me) for indexing in that way,
because it already applies that pseudo-powder-pattern concept
together with Monte Carlo and pseudo simulated annealing.
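As a minimal sketch of that idea (my own Python reading of it, with invented
peak positions and intensities - it is not code taken from ESPOIR), the
pseudo-pattern and its Rp test could look like this:

    import numpy as np

    def pseudo_pattern(positions, intensities, fwhm, two_theta):
        """Sum of Gaussians at the extracted peak positions, width chosen freely."""
        sigma = fwhm / 2.3548
        y = np.zeros_like(two_theta)
        for pos, inten in zip(positions, intensities):
            y += inten * np.exp(-0.5 * ((two_theta - pos) / sigma) ** 2)
        return y

    def rp(y_obs, y_calc):
        """Conventional profile agreement factor Rp = sum|yobs - ycalc| / sum yobs."""
        return np.sum(np.abs(y_obs - y_calc)) / np.sum(y_obs)

    two_theta = np.arange(5.0, 60.0, 0.02)
    obs_pos = np.array([12.4, 17.6, 21.5, 24.9, 28.0])   # extracted positions (deg 2Theta)
    obs_int = np.array([100., 40., 75., 20., 55.])       # extracted intensities

    # The same trial, mis-set by 0.15 deg, scored against pseudo-patterns of two widths.
    for fwhm in (0.5, 0.05):
        y_obs = pseudo_pattern(obs_pos, obs_int, fwhm, two_theta)
        y_trial = pseudo_pattern(obs_pos + 0.15, obs_int, fwhm, two_theta)
        print(f"FWHM {fwhm:.2f} deg: Rp = {rp(y_obs, y_trial):.2f}")

    # With the enlarged width the mis-set trial still gives a much better Rp than
    # with narrow peaks, where the profiles barely overlap and Rp approaches 2:
    # broad pseudo-peaks widen the convergence radius.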
Moreover, applying the "brute force" approach (trying all possibilities
on a grid) may not need prohibitive time if enlarging the peak widths
allows the grid step to be increased (say to 0.03 or 0.02 Angstrom
instead of 0.01 or 0.005). The main problem is finding the time to
write the program ;-).
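A rough sketch of that brute-force idea (cubic case only; the Cu K-alpha
wavelength and all numbers are assumed, and the "observed" pattern is simply
generated from a pretend answer rather than from real extracted peaks):

    import numpy as np

    WAVELENGTH = 1.5406                           # assumed Cu K-alpha (Angstrom)
    FWHM = 0.5                                    # deliberately enlarged peak width
    NS = sorted({h*h + k*k + l*l for h in range(5)
                 for k in range(5) for l in range(5)} - {0})

    def cubic_two_thetas(a):
        """2Theta positions for a cubic cell of edge a, inside a 5-60 deg window."""
        out = []
        for n in NS:
            s = WAVELENGTH * np.sqrt(n) / (2.0 * a)   # sin(theta) = lambda / (2 d)
            if s < 1.0:
                tt = 2.0 * np.degrees(np.arcsin(s))
                if 5.0 <= tt <= 60.0:
                    out.append(tt)
        return np.array(out)

    def profile(positions, two_theta, fwhm=FWHM):
        sigma = fwhm / 2.3548
        y = np.zeros_like(two_theta)
        for pos in positions:                     # unit intensities, for simplicity
            y += np.exp(-0.5 * ((two_theta - pos) / sigma) ** 2)
        return y

    def rp(y_obs, y_calc):
        return np.sum(np.abs(y_obs - y_calc)) / np.sum(y_obs)

    two_theta = np.arange(5.0, 60.0, 0.02)
    y_obs = profile(cubic_two_thetas(8.24), two_theta)   # pretend "unknown" answer a = 8.24

    # Brute force: test every cell edge on a coarse 0.03 Angstrom grid.
    best = min((rp(y_obs, profile(cubic_two_thetas(a), two_theta)), a)
               for a in np.arange(2.0, 20.0, 0.03))
    print(f"best Rp {best[0]:.3f} at a = {best[1]:.2f} Angstrom")   # lands at a = 8.24

With sharp peaks the nearest point of such a coarse grid can already miss
the high-angle lines entirely, so it is the enlarged width that makes the
coarse grid workable.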
Best,
Armel
PS - Some evidently prohibitive times:
- Testing all possible cubic cells from 2 to 52 Angstroms in steps
of 0.001 corresponds to 50000 trials, and current processors should
allow that to be tested in less than one second (the tests being made
on the Rp of a small pseudo-powder pattern built from the first 20
peaks).
- Finding the 2 parameters for hexagonal/trigonal and tetragonal cells
would already need 50000 times longer. Enlarging the step to 0.01 or
to 0.03 Angstroms makes a difference (respectively 500 or 55 times more
trials than the cubic case, i.e. between 1 and 10 minutes).
- Orthorhombic starts to pose a problem. Then, monoclinic and triclinic,
with a systematic 0.03 grid step to explore 2-32 Angstroms (1000 steps),
and a 0.04° step to explore 90-130° (1000 steps), would correspond to
231 days and 633000 years of calculation, respectively (if 3000000 steps
are tested per minute) - not taking account of redundant cells.
But Monte Carlo random steps are expected to reduce these times a lot.
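For the record, a quick check of the orthorhombic, monoclinic and triclinic
figures above, under the same assumptions (1000 grid steps per free cell
parameter, 3000000 trials per minute):

    STEPS_PER_PARAM = 1000
    TRIALS_PER_MIN = 3_000_000

    for name, n_params in [("orthorhombic", 3), ("monoclinic", 4), ("triclinic", 6)]:
        minutes = STEPS_PER_PARAM ** n_params / TRIALS_PER_MIN
        days = minutes / (60 * 24)
        print(f"{name:12s} {minutes:>16,.0f} min = {days:>14,.1f} days = {days / 365.25:>12,.0f} years")

    # orthorhombic: about 5.6 hours; monoclinic: about 231 days; triclinic: about
    # 633,000 years - in line with the figures quoted above.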
Making a very simple indexing program following these recommendations
is straightforward. Whether it is worth doing, now that processors are
close to 3 GHz, is another story.
The test case in the Kariuki et al. paper (J. Synchrotron Rad. 6, 1999, 87-92),
which used a genetic algorithm, was orthorhombic with cell parameters
close to 5 Angstroms.
Using high quality synchrotron raw data clearly pushes towards a smaller
grid step. We need the opposite, a larger grid step, and this is
obtained by enlarging the peak width.
Insensitivity to impurities below about 10% should follow. But I think
that indexing a 50-50% mixture of two unknown phases would obviously be
quite difficult on the basis of an Rp test.
Comments?