From: Aaron Alt
Date: 18 November 2011 14:36
Hi all,
I have data indexed in I23 with ~3000 unique reflections. Having set
aside 10% of these, my refinements still go berserk, although the maps
do look fine. The same happens when the data are reindexed in lower
symmetries. Phenix AutoBuild, for example, finishes with R/Rfree =
28/40, and I get similar results (though it took me longer) tracing
manually and refining with Refmac. Does it make sense to set aside 500
reflections in my case, which would be ~17% of the data? What is the
correct way to deal with data of this type? Ignore the Rfree completely?
A nice weekend to all,
Aaron
----------
From: Pavel Afonine
Hi Aaron,
Here is what I would do:
- create 10-20 independent test sets, each containing 5% of the reflections (make sure lattice symmetry is taken into account - Phenix does this by default);
- solve and refine the structure against each data set (choose a refinement strategy that does not leave you with a very large Rfree-Rwork gap, like the 28/40 you have right now).
Then see how the final models, maps and R-factors differ. That will give you an idea of how reliable your results are, and a starting point for further thoughts.
Of course this is not the only way to tackle the problem, but it is one possibility.
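Pavel's first step - drawing several independent test sets of ~5% each - can be sketched in a few lines of Python. This is only a toy over reflection indices: the function name and parameters are illustrative, and it deliberately ignores the lattice-symmetry handling Pavel mentions (real tools such as Phenix group symmetry-related reflections into the same set).

```python
import random

def make_test_sets(n_refl, n_sets=10, fraction=0.05, seed=0):
    """Draw n_sets independent test sets, each holding ~fraction of the
    reflections (represented here simply as indices 0..n_refl-1).
    NOTE: a toy sketch - it does not respect lattice symmetry."""
    rng = random.Random(seed)
    size = max(1, round(n_refl * fraction))
    # Each set is sampled independently, so sets may overlap,
    # which is exactly what "independent test sets" means here.
    return [set(rng.sample(range(n_refl), size)) for _ in range(n_sets)]

# ~3000 unique reflections, 10 independent 5% test sets of 150 each
sets = make_test_sets(3000, n_sets=10, fraction=0.05)
```

Each set would then be flagged as FreeR in its own copy of the reflection file before the parallel refinements.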
Pavel
----------
From: Robbie Joosten
Hi Aaron,
You don't explain why you have so few reflections. Is it a small cell, low resolution or just really bad data?
Assuming it's not the last one and your data is reasonably complete, I would try this:
- Divide your reflections into six groups (and check that these groups are really of equal size).
- Refine with one set excluded and optimize your refinement protocol. Do a lot of cycles of refinement to ensure that the refinement converges.
- Generate maps using all reflections (i.e. do not exclude the set you excluded in refinement). If you leave out 17% of your reflections, you either get poor maps due to missing Fourier terms, or your maps will be very biased towards your model.
- Once you are content with your model, do six refinements with different sets excluded, as Pavel said. You can reset the B-factors if you are worried about model bias. Use even more refinement cycles than before to be sure the refinements converge.
- Report ALL the R-free values in your publication and describe the methods really well.
- Deposit the model with the R-free closest to the mean.
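Robbie's first step - dividing the reflections into six groups of genuinely equal size - amounts to a k-fold style split. Below is a minimal sketch over plain index lists; the function name is hypothetical, and as with any hand-rolled split it ignores the symmetry-related grouping that dedicated tools (e.g. CCP4's FREERFLAG or Phenix) take care of.

```python
import random

def split_into_folds(n_refl, n_folds=6, seed=0):
    """Shuffle reflection indices and split them into n_folds disjoint,
    nearly equal groups. Every reflection lands in exactly one group.
    NOTE: a toy sketch - it does not respect lattice symmetry."""
    rng = random.Random(seed)
    idx = list(range(n_refl))
    rng.shuffle(idx)
    # Striding over the shuffled list gives groups whose sizes
    # differ by at most one reflection.
    return [idx[i::n_folds] for i in range(n_folds)]

# ~3000 unique reflections -> six disjoint groups of 500 each
folds = split_into_folds(3000, n_folds=6)
```

With 3000 reflections each group holds exactly 500, so excluding one group per refinement matches the six refinements Robbie describes.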
HTH,
Robbie