[phenixbb] help with validating a very low res structure

Nathaniel Echols nechols at lbl.gov
Thu Apr 5 18:10:13 PDT 2012

On Thu, Apr 5, 2012 at 5:55 PM, Yunji Wu <wuy at caltech.edu> wrote:
> I have an 8 angstrom structure solved using molecular replacement, refined to Rwork 0.38 and Rfree of 0.47. I also have a heavy atom (not SeMet) derivative data set to 10 angstroms. The idea is to get as much information from these data as possible so as to validate my current MR model. My approach is to calculate a difference map using the isomorphous difference Fouriers OR anomalous difference Fouriers as the amplitudes, with phases derived from the MR model-- then using the difference map to locate putative heavy atom sites. I would use various controls and experiments to (computationally) validate the credibility of these sites (e.g. cross-phasing with only iso/ano differences). Anyone have any suggestions for how to do this using Phenix? I use the GUI (generally the nightly builds), and am a newcomer to Phenix.

I am neither old school nor an experimental phasing expert, but there
are GUIs for creating anomalous difference maps (the standard "Create
maps" interface, which will automatically generate an anomalous map if
the data are anomalous) or isomorphous difference maps (further down
the list).  The other thing to try (in fact, the first thing I would
try in this case) would be to run MR-SAD in the Phaser-EP GUI, using
the refined model as input and telling it to complete the anomalous
substructure with whatever heavy atom you used.  The map it outputs
will be phase-combined, so still very biased by the model, but being
able to place the heavy atom is a good sign, and you can then take
this and run simple SAD phasing in AutoSol with it as input (with
model-building disabled, obviously).
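For what it's worth, the difference-Fourier idea underlying all of this can
be sketched in a few lines. This is not Phenix code and the arrays below are
toy data standing in for real structure factors; it just illustrates how
(|F_deriv| - |F_native|) amplitudes are combined with model phases and
transformed to give a map whose peaks mark putative heavy-atom sites:

```python
import numpy as np

# Toy reciprocal-space grid: amplitudes and phases are random stand-ins
# for real native/derivative data and MR-model phases.
rng = np.random.default_rng(0)
shape = (8, 8, 8)
f_native = rng.uniform(10.0, 100.0, shape)        # |F| native (toy)
f_deriv = f_native + rng.normal(0.0, 5.0, shape)  # |F| derivative (toy)
model_phases = rng.uniform(-np.pi, np.pi, shape)  # phases from MR model (toy)

# Isomorphous difference coefficients: amplitude = |F_deriv| - |F_native|,
# phase taken from the model.
delta_f = (f_deriv - f_native) * np.exp(1j * model_phases)

# The difference map is the Fourier transform of those coefficients;
# strong positive peaks are candidate heavy-atom sites.
diff_map = np.fft.ifftn(delta_f).real

# Pick grid points above, say, 4 sigma as putative sites.
sigma = diff_map.std()
sites = np.argwhere(diff_map > 4.0 * sigma)
print(diff_map.shape, len(sites))
```

In practice Phenix handles symmetry, scaling of the two data sets, and peak
searching for you; the point of the sketch is only that the map quality is
limited by the model phases, which is why the cross-validation controls you
describe are worth doing.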

Having done a little bit of very-low-resolution refinement recently, I
think R-free of 0.47 is too high (the gap between R-factors is also
very large).  What refinement strategy were you using?  If you're
willing to share the data with us I'd be interested in taking a look,
because I'd like to figure out exactly how much refinement we can get
away with, and how well different strategies work.  Pavel has
suggested that individual B-factor refinement, if properly restrained,
may work best even for data like this, and my experiences have tended
to confirm this.
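As a reminder of what those numbers mean (toy amplitudes here, not your
data), the R-factor being discussed is just the normalized sum of amplitude
discrepancies, computed separately over the working and free reflection sets:

```python
import numpy as np

def r_factor(f_obs, f_calc):
    """R = sum(| |Fobs| - |Fcalc| |) / sum(|Fobs|), over a set of reflections."""
    f_obs = np.asarray(f_obs, dtype=float)
    f_calc = np.asarray(f_calc, dtype=float)
    return np.abs(f_obs - f_calc).sum() / f_obs.sum()

# Toy example: four reflections.
f_obs = np.array([100.0, 80.0, 60.0, 40.0])
f_calc = np.array([70.0, 90.0, 50.0, 45.0])
print(round(r_factor(f_obs, f_calc), 3))  # prints 0.196
```

A large gap between R-work and R-free, as in the 0.38/0.47 case above,
suggests the refinement is fitting the working set at the expense of the
cross-validation set, i.e. too many parameters for 8 angstrom data.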

