From: James Holton
Date: 28 October 2011 20:19
Changed the subject line to try and keep the original thread on-topic.
George is right that the acceleration/deceleration introduces error, but only in what I like to call "standing start" mode. These days I think most facilities use "running starts", where the spindle is brought up to the degrees/second required by the exposure just before opening the shutter at the desired "starting phi" position, and the shutter is closed at the "ending phi" with the sample still moving at full speed, allowing it to decelerate afterward. The "overhead" for achieving a stable velocity depends on the speed, of course, but this "prep time" is generally not more than a few hundred milliseconds.

This is NOT the same thing as shutter jitter! In fact, the "opening time" of the shutter is not the jitter either. Even if the shutter takes 100 ms to "open", as long as that delay is reproducible, it won't hurt the data quality. Shutter jitter is the rms deviation of all the "true opening times" from the average. I have not measured this on any beamline other than my own, but here the shutter jitter is rms ~0.6 millisecond, which translates to 0.0006 degrees at 1 degree/s. Assuming a mosaic spread of 0.5 degrees, this introduces an error as large as 0.3%. I say "as large as" because only partials are affected by shutter jitter, not fulls.
All that said, however, if Rmerge is 3%, then sqrt(3%^2 - 0.3%^2) = 2.985%, so an error of 2.985% is due to something other than "the shutter".
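A quick sketch of that arithmetic in Python (independent error sources add in quadrature, so the non-shutter part is what remains after subtracting in quadrature; the numbers are the ones quoted above):

    import math

    jitter_s  = 0.6e-3            # rms shutter jitter (seconds)
    speed_dps = 1.0               # spindle speed (degrees/second)
    print(jitter_s * speed_dps)   # -> 0.0006 degrees of rms rotation error

    rmerge      = 0.030           # overall Rmerge
    shutter_err = 0.003           # upper bound on the shutter contribution
    print(math.sqrt(rmerge**2 - shutter_err**2))  # -> ~0.02985, i.e. 2.985%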
We thought ourselves quite clever here at ALS 10 years ago when we implemented running starts for exposures on these (at the time) new-fangled "air bearing" spindles, only to find that they didn't really make all that much difference when compared side-by-side with standing starts. Unless, of course, we started doing REALLY short exposures, like 0.08 s or less. Then the noise power spectrum of the beam (flicker noise) starts to become important. This is actually easier to measure than you might think: just put up a photodiode and record samples of the "beam intensity" as fast as you can. The rms variation in the recorded intensity, divided by the square root of the sampling rate, is the "flicker noise". I typically get 0.15%/sqrt(Hz), which means that the average variation in integrated beam intensity over a 1 s exposure is 0.15%, and that of a 0.01 s exposure is 1.5%. That is, the longer you average, the more the "flicker noise" of the beam is averaged out. The trick with flicker noise, however, is that the relevant "exposure time" is the time it takes the relp (reciprocal-lattice point) to transit the Ewald sphere, not the commanded "exposure time" of the image. This means that data collections from low-mosaicity crystals are more sensitive to beam instability.
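For anyone who wants to try this, a minimal sketch of the measurement, assuming you already have a stream of photodiode readings and know the sampling rate (the function name is mine, not any beamline API):

    import numpy as np

    def flicker_noise(samples, rate_hz):
        # rms fractional variation of the readings, divided by sqrt(rate).
        # The result is in %/sqrt(Hz), so dividing it by sqrt(t) predicts
        # the variation of a t-second exposure.
        samples = np.asarray(samples, dtype=float)
        frac_rms = samples.std() / samples.mean()
        return 100.0 * frac_rms / np.sqrt(rate_hz)

    # scaling check: 0.15 %/sqrt(Hz) over 1 s and 0.01 s exposures
    for t in (1.0, 0.01):
        print(0.15 / np.sqrt(t), "% expected variation over", t, "s")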
An important property of flicker noise is that sampling more finely and averaging afterward does not change the signal-to-noise ratio (by definition, actually). You can try this by generating random numbers, each with a mean of 100 and an rms error of 10%. If you average them in groups of 100, the result will be 100 +/- 1%, and if you take them in groups of 10 and then average 10 of those groups at a time, you still get the same answer: 1% error. However, if you "attenuate the beam" 10-fold and collect 10x more data points (averaging 1000 values of 10 +/- 10%) you will get 10 +/- 0.3%, reducing the flicker noise by a factor of ~3. This may seem magical if you are used to thinking about photon-counting noise, because photon-counting noise does not depend on how much time you take to collect the photons, but every other kind of noise does!
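That numerical experiment takes only a few lines (a sketch using the same numbers as above; independent Gaussian noise stands in for the fluctuating readings):

    import numpy as np

    rng = np.random.default_rng(0)

    # 100,000 readings with mean 100 and 10% rms error
    x = rng.normal(100, 10, size=100_000)

    # average in groups of 100: the group means scatter by ~1%
    g100 = x.reshape(-1, 100).mean(axis=1)
    print(g100.std() / g100.mean())            # ~0.01

    # groups of 10, then average 10 group means at a time: still ~1%
    g10 = x.reshape(-1, 10).mean(axis=1)
    g10x10 = g10.reshape(-1, 10).mean(axis=1)
    print(g10x10.std() / g10x10.mean())        # ~0.01 again

    # "attenuate the beam" 10-fold and take 10x more points:
    # 1,000,000 readings with mean 10 and 10% rms, averaged in groups of 1000
    y = rng.normal(10, 1, size=1_000_000).reshape(-1, 1000).mean(axis=1)
    print(y.std() / y.mean())                  # ~0.003, a ~3x improvement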
It is therefore important not to forget that counting detectors introduce a "new" kind of error: pile-up. The Lambert W function can be used to correct for pile-up (and that is what the Pilatus does), but this correction relies on the assumption that the "true" intensity over a given acquisition time was constant (Poisson statistics). This is definitely not the case with spots moving quickly through the Ewald sphere, and that is why Dectris recommends fine phi-slicing. Fine slicing has advantages on any detector (Pflugrath, 1999), but it is an absolute requirement for getting good data from any counting device, including the Pilatus. Of course, this does not mean you can't add up fine-sliced images after they are acquired to form a coarser-sliced data set, but that is a form of "lossy compression".
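For reference, the usual textbook form of that correction treats the detector as a "paralyzable" counter: the observed rate is n_obs = n_true * exp(-n_true * tau) for dead time tau, and inverting that relation is where the Lambert W function comes in. A sketch of the model only, not the actual Dectris calibration (which is per-pixel and more involved); the dead-time value here is illustrative:

    import numpy as np
    from scipy.special import lambertw

    def true_rate(observed, tau):
        # invert n_obs = n_true * exp(-n_true * tau); the principal branch
        # gives the physical (low-rate) solution, valid while n_true*tau < 1
        return float(np.real(-lambertw(-observed * tau) / tau))

    tau = 120e-9                        # illustrative dead time (seconds)
    for n in (1e4, 1e5, 1e6):           # true photons/s
        obs = n * np.exp(-n * tau)      # what the counter would report
        print(n, obs, true_rate(obs, tau))   # last column recovers n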
Formally, pile-up, beam flicker, and shutter jitter fall into what I call the "% error" category, as does the noise induced by crystal vibration (usually buffeting in the cryostream) or simply a "noisy" velocity control on the spindle motor. But since continuous-readout detectors are not immune to beam flicker, sample vibration, or velocity-control instability, I decided not to mention them earlier.
-James Holton
MAD Scientist
On 10/27/2011 6:36 AM, George M. Sheldrick wrote:
There are two further complications. In non-continuous mode, the goniometer has
to accelerate at the start of a frame and decelerate at the end, then wait for
the frame to be read. So even if the shutter always functions perfectly, my
intuition tells me that it must be more accurate to rotate at constant speed
(however my intuition is often wrong). Secondly, in continuous mode, usually
not all pixels are read out at precisely the same time.
George
On Thu, Oct 27, 2011 at 11:05:01AM +0100, David Waterman wrote:
I agree with Colin here. Framing is simply a process of sampling an original
signal at some 'frequency' (related to the phi-width of each frame). At some
point, delta phi is small enough that the original signal is oversampled, and
can be reconstructed _within the bounds of noise_. Beyond that point I see no
advantage to sampling finer - and certainly not going to the limit of
representing your data in some unframed continuous readout form.
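A toy illustration of that sampling argument (a sketch assuming a Gaussian rocking curve whose width is set by the mosaic spread; nothing here is specific to any real detector):

    import numpy as np

    SIGMA = 0.5 / 2.3548   # Gaussian rocking curve with 0.5 degree FWHM

    def profile(phi):
        # the continuous "signal": reflection intensity vs phi (degrees)
        return np.exp(-0.5 * (phi / SIGMA) ** 2)

    def frame_averages(dphi):
        # what frames record: the signal averaged over bins of width dphi
        edges = np.arange(-1.5, 1.5 + dphi / 2, dphi)
        centers = 0.5 * (edges[:-1] + edges[1:])
        avgs = np.array([profile(np.linspace(a, b, 2001)).mean()
                         for a, b in zip(edges[:-1], edges[1:])])
        return centers, avgs

    for dphi in (0.5, 0.1, 0.02):
        centers, avgs = frame_averages(dphi)
        print(dphi, np.max(np.abs(avgs - profile(centers))))
    # the deviation falls off as dphi**2; once it is below the photon-counting
    # noise, finer slicing recovers nothing further from the continuous signal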
Perhaps I am missing something, and I realise this is another OT diversion from
this most fruitful of threads.
Cheers
-- David
On 27 October 2011 08:55, Martin M. Ripoll wrote:
Dear Colin,
I think you understood perfectly what George was saying regarding the loss
of information, but he will probably answer better than I can.
In any case, for those who did not understand it: what George was
saying relates to the fact that a data collection made with continuous
crystal rotation contains more information than when that signal is
transformed into frames... The loss of information we are referring to
has the same meaning as when we calculate electron density maps with
different grid sizes: the finer the grid, the greater the information in
the map.
But you are right in saying that the shorter the interval between the
frames produced, the lower the loss of information. However, the procedure
you are suggesting must have some limits... otherwise the amount of
data would grow dramatically.
All the best,
Martin
Dear George, Martin
I don't understand the point that one is throwing away information by
storing in frames. If the frames have sufficiently fine intervals (given by
some sampling theorem consideration) I can't see how one loses information.
Can one of you explain?
Thanks
Colin
--
Dear George, dear all,
I was just trying to summarize my point of view regarding this important
issue when I got your e-mail, which reflects exactly my own opinion!
Martin
________________________________________
Dr. Martin Martinez-Ripoll
This raises an important point. The new continuous-readout detectors, such as
the Pilatus for beamlines or the Bruker Photon for in-house use, enable the
crystal to be rotated at constant velocity, eliminating the mechanical errors
associated with 'stop and go' data collection. Storing their data in 'frames'
is an artificial construction that is currently required for the established
data integration programs but is in fact throwing away information. Maybe in
10 years time 'frames' will be as obsolete as punched cards!
George
On Wed, Oct 26, 2011 at 09:39:40AM +0100, Graeme Winter wrote:
> Hi James,
>
> Just to pick up on your point about the Pilatus detectors. Yesterday
> in 2 hours of giving a beamline a workout (admittedly with Thaumatin)
> we acquired 400+ GB of data*. Now I appreciate that this is not
> really routine operation, but it does raise an interesting point - if
> you have loaded a sample and centred it, collected test shots and
> decided it's not that great, why not collect anyway as it may later
> prove to be useful?
>
> Bzzt. 2 minutes or less later you have a full data set, and barely
> even time to go get a cup of tea.
>
> This does to some extent move the goalposts, as you can acquire far
> more data than you need. You never know, you may learn something
> interesting from it - perhaps it has different symmetry or packing?
> What it does mean is that if we can have a method of tagging this data,
> there may be massively more opportunity to get also-ran data sets for
> methods development types. What it also means, however, is that the cost
> of curating this data is then an order of magnitude higher.
>
> Moving it around is also rather more painful.
>
> Anyhow, I would try to avoid dismissing the effect that new continuous
> readout detectors will have on data rates; from experience it is
> pretty substantial.
>
> Cheerio,
>
> Graeme
>
> *by "data" here what I mean is images, rather than information which
> is rather more time consuming to acquire. I would argue you get that
> from processing / analysing the data...
>
> On 24 October 2011 22:56, James Holton wrote:
> > The Pilatus is fast, but for decades now we have had detectors that can
> > read out in ~1s. This means that you can collect a typical ~100 image
> > dataset in a few minutes (if flux is not limiting). Since there are ~150
> > beamlines currently operating around the world and they are open about
> > 200 days/year, we should be collecting ~20,000,000 datasets each year.
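The arithmetic behind that estimate checks out, assuming roughly two minutes per dataset including overheads (an assumed figure, consistent with "~100 images in a few minutes" above):

    beamlines   = 150
    days_per_yr = 200
    min_per_day = 24 * 60
    min_per_set = 2     # ~100 images at ~1 s each, plus overhead (assumed)
    print(beamlines * days_per_yr * min_per_day // min_per_set)  # 21,600,000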
> >
> > We're not.
> >
> > The PDB only gets about 8000 depositions per year, which means either we
> > throw away 99.96% of our images, or we don't actually collect images
> > anywhere near the ultimate capacity of the equipment we have. In my
> > estimation, both of these play about equal roles, with ~50-fold attrition
> > between ultimate data collection capacity and actual collected data, and
> > another ~50 fold attrition between collected data sets and published
> > structures.
> >
> > Personally, I think this means that the time it takes to collect the
> > final dataset is not rate-limiting in a "typical" structural biology
> > project/paper. This does not mean that the dataset is of little value.
> > Quite the opposite! About 3000x more time and energy is expended
> > preparing for the final dataset than is spent collecting it, and these
> > efforts require experimental feedback. The trick is figuring out how
> > best to compress the "data used to solve a structure" for archival
> > storage. Do the "previous data sets" count? Or should the compression
> > be "lossy" about such historical details? Does the stuff between the
> > spots matter? After all, h,k,l,F,sigF is really just a form of data
> > compression. In fact, there is no such thing as "raw" data. Even "raw"
> > diffraction images are a simplification of the signals that came out of
> > the detector electronics. But we round off and average over a lot of
> > things to remove "noise", largely because "noise" is difficult to
> > compress. The question of how much compression is too much compression
> > depends on which information (aka noise) you think could be important
> > in the future.
> >
> > When it comes to fine-sliced data, such as that from Pilatus, the main
> > reason why it doesn't compress very well is not because of the spots,
> > but the background. It occupies thousands of times more pixels than the
> > spots. Yes, there is diffuse scattering information in the background
> > pixels, but this kind of data is MUCH smoother than the spot data (by
> > definition), and therefore is optimally stored in larger pixels. Last
> > year, I messed around a bit with applying different compression
> > protocols to the spots and the background, and found that ~30 fold
> > compression can be easily achieved if you apply h264 to the background
> > and store the "spots" with lossless png compression:
> >
> > http://bl831.als.lbl.gov/~jamesh/lossy_compression/
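The spot/background split itself is easy to prototype. A sketch of the idea only (a hypothetical threshold rule, not the actual protocol at the link above): flag pixels that rise above the local background by more than the expected Poisson noise, keep those losslessly, and send the rest through an aggressive lossy codec.

    import numpy as np
    from scipy.ndimage import median_filter

    def split_spots(image, k=5.0):
        # local background from a median filter; "spots" are pixels more than
        # k Poisson sigmas above it. Returns (mask, spot layer, background layer).
        bg = median_filter(image.astype(float), size=9)
        noise = np.sqrt(np.maximum(bg, 1.0))
        mask = image > bg + k * noise
        return mask, np.where(mask, image, 0), np.where(mask, 0, image)

    # tiny demo on a synthetic image: flat background of ~50 plus one "spot"
    img = np.random.default_rng(1).poisson(50, size=(64, 64))
    img[32, 32] += 500
    mask, spots, background = split_spots(img)
    print(mask.sum())   # ~1: only the injected spot pixel stands out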
> >
> > I think these results "speak" to the relative information content of
> > the spots and the pixels between them. Perhaps at least the "online
> > version" of archived images could be in some sort of lossy-background
> > format? With the "real images" in some sort of slower storage (like a
> > room full of tapes that are available upon request)? Would 30-fold
> > compression make the storage of image data tractable enough for some
> > entity like the PDB to be able to afford it?
> >
> >
> > I go to a lot of methods meetings, and it pains me to see the most
> > brilliant minds in the field starved for "interesting" data sets. The
> > problem is that it is very easy to get people to send you data that is
> > so bad that it can't be solved by any software imaginable (I've got
> > piles of that!). As a developer, what you really need is a "right
> > answer" so you can come up with better metrics for how close you are to
> > it. Ironically, bad, unsolvable data that is connected to a right
> > answer (aka a PDB ID) is very difficult to obtain. The explanations
> > usually involve protestations about being in the middle of writing up
> > the paper, the student graduated and we don't understand how he/she
> > labeled the tapes, or the RAID crashed and we lost it all, etc. etc.
> > Then again, just finding someone who has a data set with the kind of
> > problem you are interested in is a lot of work! So is figuring out
> > which problem affects the most people, and is therefore "interesting".
> >
> > Is this not exactly the kind of thing that publicly-accessible
> > centralized scientific databases are created to address?
> >
> > -James Holton
> > MAD Scientist
> >
> > On 10/16/2011 11:38 AM, Frank von Delft wrote:
> >>
> >> On the deposition of raw data:
> >>
> >> I recommend to the committee that before it convenes again, every
> >> member should go collect some data on a beamline with a Pilatus
> >> detector [feel free to join us at Diamond]. Because by the probable
> >> time any recommendations actually emerge, most beamlines will have one
> >> of those (or similar), we'll be generating more data than the LHC, and
> >> users will be happy just to have it integrated, never mind worry about
> >> its fate.
> >>
> >> That's not an endorsement, btw, just an observation/prediction.
> >>
> >> phx.
> >>
> >>
> >>
> >>
> >> On 14/10/2011 23:56, Thomas C. Terwilliger wrote:
> >>>
> >>> For those who have strong opinions on what data should be deposited...
> >>>
> >>> The IUCr is just starting a serious discussion of this subject. Two
> >>> committees, the "Data Deposition Working Group", led by John Helliwell,
> >>> and the Commission on Biological Macromolecules (chaired by Xiao-Dong
> >>> Su), are working on this.
> >>>
> >>> Two key issues are (1) feasibility and importance of deposition of raw
> >>> images and (2) deposition of sufficient information to fully reproduce
> >>> the crystallographic analysis.
> >>>
> >>> I am on both committees and would be happy to hear your ideas
> >>> (off-list). I am sure the other members of the committees would
> >>> welcome your thoughts as well.
> >>>
> >>> -Tom T
> >>>
> >>> Tom Terwilliger
> >>>
> >>>
> >>>
> >>>>> This is a follow-up (or a digression) to James comparing the test
> >>>>> set to missing reflections. I also heard this issue mentioned before
> >>>>> but was always too lazy to actually pursue it.
> >>>>>
> >>>>> So.
> >>>>>
> >>>>> The role of the test set is to prevent overfitting. Let's say I have
> >>>>> the final model and I monitored the Rfree every step of the way and
> >>>>> can conclude that there is no overfitting. Should I do the final
> >>>>> refinement against the complete dataset?
> >>>>>
> >>>>> IMCO, I absolutely should. The test set reflections contain
> >>>>> information, and the "final" model is actually biased towards the
> >>>>> working set. Refining using all the data can only improve the
> >>>>> accuracy of the model, if only slightly.
> >>>>>
> >>>>> The second question is practical. Let's say I want to deposit the
> >>>>> results of the refinement against the full dataset as my final
> >>>>> model. Should I not report the Rfree and instead insert a remark
> >>>>> explaining the situation? If I report the Rfree from prior to the
> >>>>> test-set removal, it is certain that every validation tool will
> >>>>> report a mismatch. It does not seem that the PDB has a mechanism to
> >>>>> deal with this.
> >>>>>
> >>>>> Cheers,
> >>>>>
> >>>>> Ed.
> >>>>>
> >>>>>
> >>>>>
> >>>>> --
> >>>>> Oh, suddenly throwing a giraffe into a volcano to make water is
> >>>>> crazy?
> >>>>> Julian, King of Lemurs
> >>>>>
> >
>
--
Prof. George M. Sheldrick FRS
Dept. Structural Chemistry,
University of Goettingen,
Tammannstr. 4,
D37077 Goettingen, Germany