Monday, 30 April 2012

postdoctoral position at Sanford-Burnham, San Diego, CA


From: Rongsheng Jin
Date: 30 April 2012 21:13

One postdoctoral position is available in Dr. Rongsheng Jin's laboratory at the Sanford-Burnham Medical Research Institute, La Jolla, CA, starting in mid/late 2012. The new postdoctoral fellow is expected to participate in one of two major research projects: the structural and functional characterization of ion channels, receptors, and signaling molecules at synapses, and the mechanism of action of bacterial virulence factors in their cellular context. Our laboratory employs a multidisciplinary approach that combines X-ray crystallography, SAXS, and electron microscopy with the more traditional approaches of biochemistry and biophysics. We have also built long-standing collaborative relationships in the areas of cell biology, electrophysiology, toxicology, and animal studies. The successful candidate should have a Ph.D. degree, high motivation, and a strong background in one or more of the following research areas: biochemistry, biophysics, molecular biology, and/or structural biology. Experience in insect and/or mammalian cell culture would be desirable but is not required.

 

The Sanford-Burnham is located in La Jolla, San Diego. A wide variety of intellectual and technical resources are also available nearby, such as UC San Diego, the Salk Institute, the Scripps Research Institute, and numerous biotech companies. To apply for this position, please send a curriculum vitae and the names of three references to Rongsheng Jin (rjinATburnham.org).

 

 

Evidence for hydrogen bonds between adjacent residues


From: arka chakraborty
Date: 22 February 2012 05:21


Hi all,

This is a general question regarding the formation of hydrogen bonds in proteins. Using computational methods, I have found a hydrogen bond between the backbone atoms of adjacent residues, i.e., between NH4 and CO5 of enkephalins. Is this kind of feature acceptable, and is there any experimental evidence for it? I couldn't find any reference.

Thanks in advance,

Regards,

ARKO


Best method for weighted averaging of Friedel pairs?

From: Markus Meier
Date: 10 February 2012 19:47


Dear all,
I have an anomalous dataset, processed in HKL2000. Scalepack outputs a
file containing the separately merged sets of the Friedel pairs I- and
I+ and their standard deviations sigI+ and sigI-. Scalepack does not
output the averaged intensities (Imean) or their standard deviations
(sigImean).

The CCP4 program truncate, which I use to convert the intensities to
amplitudes, requires Imean, I+ and I- and the respective standard
deviations in its input file.

I have now found at least three different methods to generate the
averaged intensities from the Friedel pairs:

1) scalepack2mtz

  uses standard deviations for the weights:
  weights w = 1/sigI

  Imean = (w+*I+ + w-*I- ) / (w+ + w-)
  sigImean = 1 / (w+ + w-)

2) Method described in Biomolecular crystallography by Bernhard Rupp, p.
332/333
  to average symmetry equivalent reflections

  uses variances for the weights:
  weight w = 1/sigI^2

  Imean = (w+*I+ + w-*I- ) / (w+ + w-)
  sigImean = 1 / sqrt(w+ + w-)

3) Method used in cctbx
  function miller.set.average_bijvoet_mates() that calls generic
merge.merge_equivalent_obs():

  same as method 2, except that

  sigImean is the larger of either
    a) sigImean = 1 / sqrt(w+ + w-)
    or
    b) sigImean = sqrt( wvariance )

  where wvariance =
    (w+ + w-) / [ (w+ + w-)^2 - (w+^2 + w-^2) ] *
    [ w+*(I+ - Imean)^2 + w-*(I- - Imean)^2 ]

What are the advantages and disadvantages of each method? Should method
1 be used at all?

Some example from my dataset:
Reflection (1, 1, 0), space group P3 2 1

I+: 23841.50 sigI+: 634.01 I-: 9628.57, sigI-: 264.75
Method 1: Imean=13815.32, sigImean=186.76
Method 2: Imean=11738.95, sigIMean=244.31
Method 3: Imean=11738.95, sigIMean=7106.47
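
A minimal Python sketch of methods 1 and 2 for this reflection (not output from any of the programs above; it reproduces the quoted numbers to within rounding):

import math

i_plus, sig_plus = 23841.50, 634.01
i_minus, sig_minus = 9628.57, 264.75

# Method 1 (scalepack2mtz): weights w = 1/sigI
wp, wm = 1.0 / sig_plus, 1.0 / sig_minus
imean1 = (wp * i_plus + wm * i_minus) / (wp + wm)
sigimean1 = 1.0 / (wp + wm)           # ~13815.4, ~186.76

# Method 2 (Rupp): inverse-variance weights w = 1/sigI^2
wp, wm = 1.0 / sig_plus ** 2, 1.0 / sig_minus ** 2
imean2 = (wp * i_plus + wm * i_minus) / (wp + wm)
sigimean2 = 1.0 / math.sqrt(wp + wm)  # ~11738.9, ~244.31

(Method 3 gives the same Imean as method 2; its sigImean is the larger of the method 2 value and the scatter-based wvariance estimate above.)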

Thanks a lot!

Cheers,
Markus



----------
From: Markus Meier


On 11/02/12 02:52 PM, Bryan Lepore wrote:
> did you ever get a response on this? it is interesting but nobody
> posted publicly.
>
> -Bryan
>

Dear Bryan,

so far no one replied ... so please find my answer below. If someone
disagrees, please post.

None of the methods I have described is appropriate.

If the negative Bijvoet mates and the positive Bijvoet mates have been
merged separately to one intensity value each (i.e. I+ or I-) plus
the associated standard deviation (sigI+ or sigI-), any weighted method
of calculating the mean will bias the intensity toward either I+ or I-.

Therefore the only appropriate method is to use the unweighted mean:

Imean = 0.5*( I+ + I- )
sigImean = 0.5 * sqrt( sigI+^2 + sigI-^2 )

The only CCP4 program I found that actually does this is mtzMADmod. This
method also has the advantage that the original intensity values of I+
and I- can be reconstructed from the mean and the anomalous difference
(albeit with the loss of the original standard deviations).
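
Worked through in Python for the example reflection from the first post (a minimal sketch):

import math

i_plus, sig_plus = 23841.50, 634.01
i_minus, sig_minus = 9628.57, 264.75

# Unweighted mean, as computed by mtzMADmod
imean = 0.5 * (i_plus + i_minus)                        # 16735.04
sigimean = 0.5 * math.sqrt(sig_plus**2 + sig_minus**2)  # ~343.5

# I+ and I- are recoverable from Imean and DANO = (I+) - (I-)
dano = i_plus - i_minus
assert math.isclose(imean + 0.5 * dano, i_plus)
assert math.isclose(imean - 0.5 * dano, i_minus)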

Method 1 (scalepack2mtz)
should not be used. The resulting value is not the best (maximum
likelihood) estimate.

Method 2 (in the book by B. Rupp)
gives the maximum likelihood average in the case that the reflections are
equivalent, and is thus appropriate for merging the negative (or
positive) set of Bijvoet mates, centric reflections (where the anomalous
differences are zero) or, in the case of a non-anomalous dataset, the
merging of symmetry-equivalent reflections.

Method 3
gives a more realistic sigma value in the case that the individual
intensity values are far apart and their individual standard deviations
are small. Consider the example I have posted:

I+: 23841.50 sigI+: 634.01 I-: 9628.57, sigI-: 264.75
Method 2: Imean=11738.95, sigIMean=244.31
Method 3: Imean=11738.95, sigIMean=7106.47

If the I+ and I- values above actually were symmetry-equivalent
reflections in a non-anomalous dataset, the sigImean from method 2 would be
ridiculously small and method 3 gives a far more realistic value. Whether
method 3 is the best mathematical solution to this problem I am not able
to judge; I have to trust the statistician (or programmer) who
implemented this solution.

Cheers,
Markus

----------
From: Tim Gruene 


Dear Markus,

why don't you reintegrate the data with hkl2000, telling the program to
treat them as a non-anomalous dataset? This should give you scalepack
output with the Bijvoet pairs merged and overcome the problem you describe.

Cheers,
Tim
----------
From: ccp4

This is a case where it is really helpful to keep some record of the unmerged integrated data.
And again, rejecting the odd outlier does no harm to most analyses.

I like to use scala to check for outliers, looking at all I+ and I- measurements; if there is a wild discrepancy for a weak anomalous signal you have probably found an outlier, which is best rejected.

If you have a huge anomalous signal with good redundancy you probably shouldn't use Imean, and the anomalous difference will be I+ - I- with SD = sqrt(VarI+ + VarI-).

Most software which uses that signal will check for outliers in the anomalous difference lists too, and it is usually safest to exclude them from anomalous site searches and phase calculations.
Eleanor

----------
From: arka chakraborty

Hi all,
Can someone post links to a few articles relevant to the above discussion, i.e. where this kind of strategy was helpful in a specific practical situation?

Thanks in advance,

Regards,

ARKO



DNA in coot

From: LISA
Date: 15 February 2012 08:01


Hi all,

I am refining a structure of a protein-DNA complex with coot. I added the DNA by "adding ideal DNA/RNA" as another model. But I cannot edit the chi angles of these nucleotides, nor mutate them. When I press mutate and pick my DNA, coot offers amino acids, not nucleotides. Why?

Thanks

Lisa

----------
From: William G. Scott


My guess is it is user error.

I just built some DNA and hit the mutate button indicated with the yellow arrow and I get the
menu shown with five nucleotides (A C G T U):


----------
From: Tim Gruene 

Hello Lisa,

which version of coot do you use? Maybe it is outdated and that function
is not yet properly implemented. I can confirm Bill's comment, and we work
with coot 0.6.2.

Cheers,
Tim
Dr Tim Gruene


----------
From: Xun Lu


Hi Lisa,

Please go check your PDB file. Are those bases written out as "DT", "THY", or "Td"? Coot recognizes a certain format for DNA bases, but I forgot which one coot likes. I don't have my laptop with me right now. My guess would be "Td".  :)

Best,

Xun


----------
From: Miguel Ortiz Lombardia



And that is compounded by the fact that, depending on your
installation, Coot may be using its own libraries or the CCP4 ones. And
they may differ, especially if you're using the "new dictionaries" for
refmac5. Plus the fact that, depending on your preferences, Coot converts
atoms to PDB v.2.x names, so they may not come back as you gave them to it. A
mess...

Paul, at the very least, it would be helpful if like in Lisa's case, an
ideal DNA/RNA is created consistent with whatever libraries Coot is
going to use for real space refining it.


Best regards,

-- Miguel



----------
From: Miguel Ortiz Lombardía 


Sorry for cross-posting, I thought some people might be interested and
only in the coot list.

Just to make clear my previous message: we have had exactly the same
problem as Lisa reports, using the latest version (0.7-pre-3971)
available from Bill's repository of standalone Coot for Mac OSX 10.6. By
default, this version uses Coot's own libraries (/Library/Coot/...). I managed
to make it behave (in terms of DNA-related work) by forcing it to use
the CCP4 6.2.0 libraries and then adding

(set-convert-to-v2-atom-names 0)

to my ~/.coot file.


--
Miguel


HR3699, Research Works Act

From: Raji Edayathumangalam
Date: 15 February 2012 22:53


If you agree, please sign the petition below. You need to register on the link below before you can sign this petition. Registration and signing the petition took me about a minute or two.

Cheers,
Raji

---------- Forwarded message ----------
From: Seth Darst


Rep. Carolyn Maloney has not backed off in her attempt to put forward the interests of Elsevier and other academic publishers.

If you oppose this measure, please sign this petition on the official 'We the People' White House web site. It needs 23,000 signatures before February 22nd and has only 1,100 so far. Please forward far and wide.


Oppose HR3699, the Research Works Act

HR 3699, the Research Works Act, will be detrimental to the free flow of scientific information that was created using Federal funds. It is an attempt to put federally funded scientific information behind pay-walls, and to confer the ownership of the information to a private entity. This is an affront to open government and open access to information created using public funds.

This link gets you to the petition:




--
Raji Edayathumangalam



----------
From: Tim Gruene 

Dear Raji,

maybe you could increase the number of supporters if you included a link
to (a description of) the content of HR3699 - I will certainly not sign
something only summarised by a few polemic sentences ;-)

Cheers,
Tim


----------
From: Boaz Shaanan

I initially thought that it had to do with a new Hampton Research thing.

But can non-American citizens sign the petition too?

      Boaz








----------
From: Adrian Goldman


I signed, and I think so.  Further information can be found here:

http://www.guardian.co.uk/science/2012/feb/02/academics-boycott-publisher-elsevier

----------
From: Ian Tickle


Reading the H.R.3699 bill as put forward
(http://thomas.loc.gov/cgi-bin/bdquery/z?d112:HR03699:@@@L&summ2=m&)
it seems to be about prohibiting US federal agencies from having
policies which permit, authorise or require authors' assent to break
the law of copyright in respect of published journal articles
describing work funded at least in part by a US federal agency.  I'm
assuming that "network dissemination without the publisher's consent"
is the same thing as breaking the law of copyright.

It seems to imply that it would still be legal for US federal agencies
to encourage others to break the law of copyright in respect of
journal articles describing work funded by, say, UK funding agencies! -
or is there already a US law in place which prohibits that?  I'm only
surprised that encouraging others to break the law isn't already
illegal (even for Govt agencies): isn't that the law of incitement
(http://en.wikipedia.org/wiki/Incitement)?

This forum in fact already has such a policy in place for all journal
articles (i.e. not just those funded by US federal agencies but by all
funding agencies): we actively discourage postings which incite
others to break the law by asking for copies of copyrighted published
articles.  Perhaps the next petition should seek to overturn this
policy?

This petition seems to be targeting the wrong law: if what you want is
free flow of information then it's the copyright law that you need to
petition to overturn, or you get around it by publishing someplace
that doesn't require transfer of copyright.

Cheers

-- Ian

----------
From: Herbert J. Bernstein


Dear Ian,

 You are mistaken.  The proposed law has nothing to do with preventing the
encouragement of people to break copyright law.  It has everything to do with
trying to kill the very reasonable NIH open access policy that properly
balances the rights of publishers with the rights of authors and the interests of
the scientific community.  Most publishers fare quite well under a policy that
gives them a year of exclusive control over papers, followed by open access.

 It is, unfortunately, a standard ploy in current American politics to make  a
law which does something likely to be very unpopular and very unreasonable
sound like it is a law doing something quite different.

 Please reread it carefully.  I think you will join in opposing this law.  Science
benefits from the NIH open access policy and the rights of all concerned
are respected.  It would be a mistake to allow the NIH open access policy to
be killed.

 I hope you will sign the petition.

 Regards,
   Herbert

----------
From: Herbert J. Bernstein


The bill summary says:

Research Works Act - Prohibits a federal agency from adopting, maintaining, continuing, or otherwise engaging in any policy, program, or other activity that: (1) causes, permits, or authorizes network dissemination of any private-sector research work without the prior consent of the publisher; or *(2) requires that any actual or prospective author, or the author's employer, assent to such network dissemination. *

Defines "private-sector research work" as an article intended to be published in a scholarly or scientific publication, or any version of such an article, that is not a work of the U.S. government, describing or interpreting research funded in whole or in part by a federal agency and to which a commercial or nonprofit publisher has made or has entered into an arrangement to make a value-added contribution, including peer review or editing, but does not include progress reports or raw data outputs routinely required to be created for and submitted directly to a funding agency in the course of research.

==========================================

It is the second provision that really cuts the legs out from under the NIH open access policy. What the NIH policy does is to make open access publication a condition imposed on the grant holders in publishing work that the NIH funded. This has provided the necessary lever for NIH-funded authors to be able to publish in well-respected journals and still to be able to require that, after a year, their work be available without charge to the scientific community. Without that lever we go back to the unlamented old system (at least unlamented by almost everybody other than Elsevier) in which publishers could impose an absolute copyright transfer that barred the authors from ever posting copies of their work on the web. People affiliated with libraries with the appropriate subscriptions to the appropriate archiving services may not have noticed the difference, but for the significant portions of both researchers and students who did not have such access, the NIH open access policy was by itself a major game changer, making much more literature rapidly accessible, and, even more importantly, it changed the culture, making open access much more respectable.

The NIH policy does nothing more than put grant-sponsored research on almost the same footing as research done directly by the government which has never been subject to copyright at all, on the theory that, if the tax-payers already paid for the research, they should have open access to the fruits of that research. This law would kill that policy. This would be a major step backwards.

Please read:

http://blogs.scientificamerican.com/evo-eco-lab/2012/01/16/mistruths-insults-from-the-copyright-lobby-over-hr-3699/

http://www.taxpayeraccess.org/action/action_access/12-0106.shtml

http://www.care2.com/causes/open-access-under-threat-hr-3699.html

Please support the petition. This is a very bad bill. It is not about protecting copyright, it is an effort to restrict the free flow of scientific information in our community.

Regards,
Herbert



----------
From: Ian Tickle


Dear Herbert

Thanks for your detailed explanation.  I had missed the important
point that it's the requirement on the authors to assent to open
access after a year, which the proposed Bill seeks to abolish, that's
critical here.

I will go and sign the petition right now!

Best wishes

-- Ian

On 16 February 2012 15:24, Herbert J. Bernstein

----------
From: Paula Salgado 


May I also suggest reading these:

https://intechweb.wordpress.com/2012/01/25/selected-reading-on-research-works-act-why-you-should-care/

https://intechweb.wordpress.com/2012/02/16/open-access-on-a-string-cut-it-and-it-will-grow-back/

Can non-US based scientists sign the petition, btw?


There are also several blogposts and discussions around regarding RWA
and subsequent calls for boycotts of publishers that support it. A few
examples (mostly from the UK):

http://cameronneylon.net/blog/the-research-works-act-and-the-breakdown-of-mutual-incomprehension/

http://gowers.wordpress.com/2012/01/21/elsevier-my-part-in-its-downfall/

http://www.elsevier.com/wps/find/intro.cws_home/elsevieropenletter

http://occamstypewriter.org/scurry/2012/02/12/an-open-letter-to-elsevier/

And my (very) personal views on the matter:
http://www.paulasalgado.org/archives/423

Best wishes
Paula


----------
From: Tanner, John J.


There was an op-ed piece in the NY Times last month about this issue, written by Michael Eisen, a founder of PLoS:

http://www.nytimes.com/2012/01/11/opinion/research-bought-then-paid-for.html?ref=carolynbmaloney

----------
From: Ian Tickle 


> Can non-US based scientists sign the petition, btw?

Well there's nothing to stop you!  It asks for your zip code, but I
just left it blank and it accepted it.

Cheers

-- Ian

----------
From: Charles W. Carter, Jr <


For what it's worth, my own experience with the issue of scholarly publication and open access is nuanced enough that perhaps my two-bits worth can add to this discussion. In short, I agree both with Ian's previous message and with Herbert, and feel that the incompatibility between them goes to the root of a problem for which the answer is certainly not quite there.

I have been much influenced by the work done on this issue by Fred Dylla, Executive Director of the American Institute of Physics. Here is a link to recent information concerning his four-year effort to reach consensus on this issue:

http://www.aip.org/aip/aipmatters/archive/2011/1_24_11.html

I personally think that the NIH Open Access requirement is a vast overreach. PubMed Central is very difficult to use and ultimately has never satisfied me: I always go to the UNC library holdings. There are several reasons why. The most immediate is that PubMed Central almost never gives a satisfactory copy of a paper I want to read, and the most serious is that I am convinced that the overhead exacted on authors and PIs by the NIH means that few, if any, authors give much more than a glance in the direction of updating deposited manuscripts from journals that do not automatically deposit the version of record. For this reason, many PubMed Central entries are likely to retain more than minor errors that were corrected in proof only in the version of record. I don't personally see any way around the problem that there is only one version of record and that version is the one for which copyright is retained by the publisher.

On the other hand, I am deeply sympathetic to the argument that publicly-funded research must be freely accessible. After talking intensely with the library administrators at UNC, I also believe deeply that university library subscriptions satisfy the need for open access. Casting aside for the moment the issue of Open Access journals, whose only real difference lies in who pays the costs of publication, I have long believed that careful validation through peer review constitutes serious added value and that journals are entitled to being paid for that added value. What makes this issue more difficult for me is that I share with many the deep suspicions of corporate (as opposed to Member Society) publishing organizations. Several years ago I withdrew my expertise from the Nature group in protest over what I felt (after, again, long discussions with our UNC librarians) was a power play designed only to weaken the library systems. I have similar views about Elsevier.

Finally, I am inclined to sign this petition for other reasons, including the fact that HR 3699 appears to be as deeply flawed in the other direction as the original enabling legislation that vested such power in the NIH and, in the same act, all but eliminated any opposition by diluting responsibility for compliance to the fullest possible extent, by penalizing PIs for non-compliance. When I first read of this petition, I was deeply incensed that the wing nuts in Congress would craft a bill so obviously designed to reward the 1%, so to speak.

In closing, I earnestly recommend that as many of you as possible look into Fred Dylla's work on this issue. The AIP is a publisher whose only revenue other than philanthropy comes from the intellectual property and added value of its journals, some of which represent the finest in physical chemistry relevant to our community. Dylla deserves kudos for his effort to find consensus, something that seems to have gone way out of fashion in recent years.

Charlie

----------
From: Pedro M. Matias

Can non-US residents sign this petition? You need a Whitehouse.gov account and in order to register you have to provide a U.S. (I presume) zipcode.


            
----------
From: Ian Tickle


Pedro,

Well it worked for me (and I see many others) without a zip code.  I
see that someone else typed "Bayreuth" in the zip code field - so I
suspect you can type anything there!

Cheers

-- Ian

----------
From: Enrico Stura


I am strongly in favour of Open Access, but Open Access is not always helped
by a lack of money for editing etc.

For example:
Acta Crystallographica is not Open Access.
In one manner or another, publishing must be financed.
Libraries pay fees for the journals. The fees help the International Union of Crystallography.
The money is used for sponsoring meetings, and some scientists who come from less rich
institutions benefit from it.

Open Access to NIH-sponsored scientific work will be for all the world's taxpayers, and tax dodgers as well.
Or maybe you would suggest that NIH-sponsored work should be accessed only by US taxpayers with a valid social security number?
The journal server would verify that tax for the current year has been filed with the IRS server. A dangerous invasion of privacy!
The more legislation we add, the worse off we are.

If the authors of a paper want their work to be available to the general public, there is Wikipedia.
I strongly support an effort by all members of ccp4bb to contribute a general-public summary of their work on Wikipedia.
There are Open Source journals as well.

I would urge everybody NOT to sign the petition. Elsevier will not last for ever, and the less
accessible the work that they publish, the worse for them in terms of impact factor.
In the old days, if your institution did not have the journal, most likely you would not reference the work
and the journal was worth nothing.
We are the ones who will decide the future of Elsevier. Elsevier will need to choose between excellent
publishing with reasonable fees and not getting referenced. A law that enforces a copyright will not help them.
They are wasting their money on lobbying.

The argument that NIH scientists need to publish in High Impact Factor journals by Elsevier does not hold up:
1) We should consider the use of the impact factor as a NEGATIVE contribution to science.
2) Each article can now have its own impact factor on Google Scholar, independent of the journal it is published in.
3) Even for journals not indexed on PubMed, Google Scholar finds them.

I hold the same opinion in the OS X debate.
Don't buy Apple! Use Linux instead. When enough people protest where it really hurts the
company, they will no longer have the money to lobby the American Congressmen.
If they make an excellent product, then they deserve the money, and quite rightly they
can try to build a monopoly around their technology. I fight that; I use Linux.

By signing petitions we acknowledge the power of the legislators. This is another form
of lobbying. If we disapprove of lobbying we should not engage in the practice, even if we give
no money.
We have more powerful means of protest. The 24 Hour shutdown of Wikipedia meets my approval.

There is also patenting. How do we feel about it?
Some of the work I have done has also been patented. I do not feel right about it.

There is MONEY everywhere. This ruins our ability to acquire knowledge that should be free for everybody.
But since it costs to acquire it, it cannot be free.
LAWS should be for the benefit of the nation. But legislators have the problem of raising money to be re-elected.
Can we trust them?
Can we trust their laws?

Companies also play very useful roles. Some companies less so.
But at least they work for a profit, and thus they must provide a worthwhile service.
This is not true for politicians.


Enrico.



----------
From: Enrico Stura


Charlie,

A much more balanced view than others have posted.

> NIH Open Access requirement is a vast overreach.
I agree.
> HR 3699 appears to be as deeply flawed.
Could it be made better with amendments?

Enrico.

----------
From: Herbert J. Bernstein


Dear Colleagues,

 Acta participates very nicely and fully in the NIH Open Access program.  After
one year of the normal restricted access, any NIH-funded paper automatically
enters the NIH open access system.  The journals get their revenue when
the paper is most in demand, but the community is not excessively delayed in
getting free access.

 I most certainly do suggest that it is a good idea for people who are not US taxpayers
to also have access to the science that NIH funding produces.  We will all live longer
and happier lives by seeing as much progress made as rapidly as possible world-wide
in health-related scientific research.  I would hate to think of the cure for a disease being
greatly delayed because some researcher in Europe or India or China could not get
access to research results.  We all benefit from seeing the best possible use made
of NIH-funded research.

 I agree that in this case, adding more legislation is a bad idea -- particularly
adding this legislation.

 I agree that


"If the authors of a paper wants their work to be available to the general public there is Wikipedia.
I strongly support an effort by all members of ccp4bb to contribute a general public summary of their work on Wikipedia.
There are Open Source journals as well. "

However, there is a practical reality for post-docs and junior faculty that, at least in the US,
most institutions will not consider Wikipedia articles in tenure and promotion evaluations,
so it really is a good idea for them, in addition to publishing in Wikipedia, to write "real"
journal articles.  I also agree that using open source journals is a good idea in the abstract,
but I, for one, really don't want the IUCr journals to go away, and the NIH Open Access
policy allows me to both support the IUCr and have my work become open access a
year later.  I think it is a wonderful compromise.  Please, don't let the perfect be the
enemy of the good.  If we don't prevent Elsevier from killing NIH Open Access with
this bill, then there is a risk that many fewer people will publish in the IUCr publications.

You seem to be arguing strongly that we should both have Open Access and have money
for editing journals.  I agree.  The current NIH Open Access policy does just that.
It is the pending bill that will face you with the stark choice of either having Open Access
or having edited journals.  You come much closer to your goals if you sign the petition
and help the NIH Open Access policy to continue in force than if the bill passes and
the NIH Open Access policy dies.  If the Open Access policy dies, I for one will face
a difficult choice -- publish in the IUCr journals and pay them an open access fee
I may not be able to come up with, or publish in free, pure open source journals
but fail to support the IUCr.  Let us hope the petition gets lots of signatures and this
misguided bill dies.

Regards,
 Herbert

----------
From: Herbert J. Bernstein


Dear Colleagues,

If you want an excellent, painless transfer from journal to PUBMED, just stick
to the IUCr journals.  They do an excellent job of cooperating with the NIH
open access policy, with an automatic transfer of the clean refereed and edited
paper to PUBMED.  Yes, the IUCr journal copy does look prettier -- more
power to them -- but nothing is missing from the PUBMED version, so
everybody benefits:  the IUCr has its subscription money from libraries and
individuals who need results as quickly as possible or in the best form, and
students and researchers without an institutional subscription can still get
a completely valid and complete copy on line.

If you pay IUCr for open access and are NIH funded, they deposit in PUBMED
immediately.  If you don't pay IUCr for open access and are NIH funded, they
deposit in PUBMED a year after publication.  Either way it works and works well,
you get excellent editing, you are publishing in very respectable journals, and
your work ends up available to everybody.

So, if you want a balanced, nuanced approach, please sign the petition, but also
publish in the IUCr journals if your work fits, and don't publish in any journals
that don't do automatic deposition or that support the NIH Open Access policy poorly.

Regards,
 Herbert

----------
From: John R Helliwell


Dear Colleagues,
Further to Herbert's summary, which I support, the publishers allowing
the deposition of their published version/PDF in Institutional
Repositories can be found listed here:-

http://www.sherpa.ac.uk/romeo/PDFandIR.php?la=en

Best wishes,
John


question about input .hkl file for SHELXD

From: Lu Yu
Date: 19 February 2012 04:06


Hi,

I am confused about the input .hkl file for SHELXD. I was using ccp4 to prepare these files, and I am not sure whether I was doing it correctly.

First, I ran scalepack2mtz to generate a .mtz file.
Second, I used mtz2various to convert the .mtz to .hkl format. I also found a program, prephadata, which can convert .mtz to .hkl; however, the .hkl files generated by these two programs differ from each other, and I don't know which one to use.

I was wondering whether any of you have tried to do this conversion, and is there any special option box in ccp4 that I need to click when I use scalepack2mtz and mtz2various?
 
Thanks for your help!

Lu



----------
From: Graeme Winter


Hello Lu,

I would usually suggest using mtz2sca (if you have an MTZ with
intensities) to get a scalepack format file, which you may have
already. Then I would use shelxc to generate the .ins and .hkl files
for shelxd.

You can do this through ccp4i from an MTZ file, or write a script as
detailed in:

http://shelx.uni-ac.gwdg.de/SHELX/fastphas.pdf

Best wishes,

Graeme
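
Such a script might look like this in Python (a sketch only; the file names, cell, space group and FIND count are placeholders, not values from this thread):

import subprocess

# MTZ intensities -> scalepack format (file names are hypothetical)
subprocess.run(["mtz2sca", "mydata.mtz", "mydata.sca"], check=True)

# shelxc reads keywords on stdin and writes mydata_fa.hkl and
# mydata_fa.ins for shelxd; substitute your own cell and space group
keywords = """SAD mydata.sca
CELL 40.0 50.0 60.0 90.0 90.0 90.0
SPAG P212121
FIND 6
"""
subprocess.run(["shelxc", "mydata"], input=keywords, text=True, check=True)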


questions about SHELXD

From: Lu Yu
Date: 18 February 2012 23:39


Hi all,

I was trying to use the SHELXD program on protein peptides (6-7 residues) for the very first time, and I got a .pdb file which should be the correct solution. However, in the .pdb file the atoms are labeled as ABC and are not recognized as amino acids.

My question is normally what program can I run after SHELXD, to put those atoms into residues in the correct order so that I can use refmac to refine the structure?

Another question: I have another peptide in space group P1, and SHELXD won't start, saying "the cell is too small to put atoms randomly". In this case, can I still use SHELXD for solving the structure, and what should I do? If not, what other programs can I use to solve small peptide structures with 6-7 residues?

Thanks for your help!!

Lu

----------
From: George Sheldrick


Dear Lu Yu,

Most readers of this list will only be familiar with the use of SHELXD to find heavy atom sites for experimental phasing, but it appears that you are using it for small molecule direct methods, which in fact is what the program was originally written for. For small molecule direct methods you MUST have a resolution at which the atoms are resolved from each other, i.e. 1.2A or better, and 1.0A is a big improvement over 1.2A. For experimental phasing the heavy atoms are further apart and so much lower resolution may succeed, I have heard of cases where even 10A data were sufficient to find heavy atom clusters.

For small molecule direct methods you will need an name.hkl file containing h,k,l,I and sigma(I) (HKLF 4 format) or h,k,l,F and sigma(F) (HKLF 3 format), as specified by the HKLF instruction at the end of the name.ins file for SHELXD. If possible you should use intensities, most integrating and scaling systems can write the HKLF 4 directly without going through mtz format. Note that all SHELX programs merge equivalents and reject systematic absences as required and that the reflections may be in any order.

If SHELXD says "the cell is too small to put atoms randomly" then you are asking it to find more atoms (the FIND instruction) than fit into the asymmetric unit. You should ask for fewer, check the CELL instruction for a typo, or maybe you have a salt crystal. Note that the space group P1 is specified by LATT -1 and no SYMM cards. If you put in LATT 1 or leave it out, the space group P-1 will be assumed.

For such a small structure it might be better to use small molecule programs for the refinement, otherwise you will have problems when you try to deposit and publish the structure. I would (of course) recommend SHELXL plus the SHELXLE GUI for this, but you should also try to find an experienced small molecule crystallographer to help you to get started, almost every chemistry department has at least one. If this does not resolve your problems I would be happy to look at your data.

Best wishes, George


Structure Refinment problem

From: Muhammed bashir Khan
Date: 19 February 2012 18:38


Dear all;

I have a structure at 3.3 A resolution of an about 140 kDa protein containing
eight domains, in a tetragonal space group. I also have the structures of most
of the individual domains. I have refined the structure in almost all
possible space groups; the best space group at the moment is P42, with
R/Rfree of 32 and 38%. Could somebody suggest what else I should try
to get better R/Rfree values? I am using Phenix for refinement.
Regarding the protein, it contains quite flexible domains.

Thanks in advance for your suggestions.

Regards,

Bashir



----------
From: Bosch, Juergen


I don't see a real problem; you seem to be doing the right thing.
Check out the Molprobity server and see what it suggests to improve your structure.
You could run pointless to verify that P42 is the true space group, but looking at your ∆Rfactor of 6% you likely have the right solution.
Do you have one monomer with eight domains in the asu, or do you have more than one copy? If you have more than one, identify the regions which follow NCS and apply NCS restraints to them.

Jürgen
..................



----------
From: Pavel Afonine

Hi Bashir,

the R-factors seem to be high given the resolution. I don't know what you have tried so far and what you haven't, and the list of things to try is too long. It might be more efficient if you send me the data and model (off-list), so I can have a quick look and maybe suggest something.

Pavel


High R factor

From: Dipankar Manna
Date: 20 February 2012 10:48


Dear Sir/Madam,

 

I am very new to this CCP4 program. Usually, after rigid body refinement, the R factor drops below whatever R factor we get from Molrep. One of my datasets is showing different behaviour: after running Molrep the R factor is 38% and the score is 64%, but when I run rigid body refinement (Refmac5) the R factor is 46.07% and Rfree is 46.27%. Is this possible? If not, what should I do with this data?

 

Please suggest.

 

Regards

 

Dipankar Manna  




----------
From: Laurent Maveyraud

Hi,

it might well be possible that molrep used only lower-resolution data (the default cutoff is 2.5 A, if I remember correctly), whereas refmac uses all available data in the MTZ file...
Check that both steps were performed at the same resolution!

Another possibility is that molrep performed the search in a different space group than the one specified in the MTZ file (e.g. P4 selected during data processing, while molrep checks P4, P41, P42 and P43). If your solution corresponds to the P41 space group, you have to modify your MTZ file (or use the one produced by molrep)... otherwise refmac will perform the rigid body step in P4 and not P41...
This should be indicated in the molrep logfile... check it carefully!

hope this helps. If not please send the logfiles !

Laurent



FFT map coefficients only for certain chains

From: zhang yu
Date: 19 February 2012 22:46


Dear CCP4ers,

Is it possible to generate map coefficients for only certain chains? For example, I have two chains, A and B, and I would like to output a map file that contains coefficients only for chain A. The "isomesh" command in Pymol can generate similar images, but my purpose is not presentation; I need a map file that contains coefficients only for certain chains.

In the "FFT" tool in Phenix or CCP4, there is an option to include a PDB file and define atom selections. It says that "If a PDB is supplied, the output map will cover the model plus a buffer on all sides. The atom selection parameters can be used to specify a smaller region". If I define the selection as chain A when I run the FFT, the output map still covers a rectangular block containing chain A, instead of only the region surrounding chain A.

Thanks.

Yu

 

--
Yu Zhang




----------
From: <Herman.Schreuder

Dear Yu,
 
You seem to be mixing up map coefficients and map files. Map coefficients are modified reflections, e.g. 2*Fobs-Fcalc multiplied by a weighting factor, with h,k,l indices, and are stored in a .mtz file. From these map coefficients you can calculate a map file with an FFT program. A map is an array of local values of the electron density, spaced say 1Å apart, stored with extensions like .map. If you provide Coot with just map coefficients, it will calculate the map around your centering atom on the fly, so it does not need a map file.
 
Since every part of real space (the contents of the unit cell) contributes to every reflection, it is not possible to generate "observed" map coefficients for one chain alone. I do not know what options you exactly tried, but normally, if you select a certain chain, it will calculate a map based on all chains, but only output the map around the selected chain.
 
To get map (coefficients) of only one chain, there are two options:
1) a straight Fcalc map. Only input a pdb file with the chain you want to select. Do not use observed F's. You will get calculated density for that chain only and the rest of the map will be empty (zeroes).
2) a (vector difference) Fo-Fc omit map. Input a pdb file without the chain you want to select and calculate an Fo-Fc map. All chains will be subtracted from Fo with the exception of the chain you left out of the Fc calculation. This map will contain density for the omitted chain plus noise due to the fact that your model is imperfect (the famous 20% Rfactor of well-refined structures). To generate a vector difference map, you should use phases calculated from the complete model for Fo, and omit-phases for Fc.
 
I hope this clarified things a bit,
Herman
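
A minimal cctbx sketch of Herman's option 1 (an Fcalc map from one chain; the file name and resolution here are placeholders):

import iotbx.pdb

pdb_inp = iotbx.pdb.input(file_name="model.pdb")
hierarchy = pdb_inp.construct_hierarchy()
xrs = pdb_inp.xray_structure_simple()

# Keep only chain A and compute structure factors from it alone
sel = hierarchy.atom_selection_cache().selection("chain A")
f_calc = xrs.select(sel).structure_factors(d_min=2.0).f_calc()

# FFT back to real space: calculated density for chain A, and
# essentially zero everywhere else
fft_map = f_calc.fft_map(resolution_factor=0.25)
fft_map.as_ccp4_map(file_name="chainA_fcalc.map")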

----------
From: Paul Emsley



Like Herman, it was not apparent to me what exactly you wanted.

If you want a map that covers only [1] or excludes [2] a particular atom selection then you can do that in Coot. 

Extensions -> Maps... -> Mask Map by Atom Selection,

e.g. to show only the "A" chain, use an atom selection of "//A".

You can then export the map using:

Extensions -> Maps... -> Export Map  
or
Extensions -> Maps... -> Export Local Map Fragment...

If you really want structure factors from that then you can use sfall (MODE SFCALC MAPIN).

Paul.

[1] use mask inversion
[2] don't use mask inversion
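
The same masking can also be scripted (a sketch only: the function names are assumed to mirror Coot's Scheme scripting interface, and the molecule numbers are placeholders for your map and model):

# In Coot's Python scripting window
imol_map, imol_model = 1, 0
# The last argument is the mask-inversion flag; per [1] above, invert
# so that only density around the selection ("//A" = chain A) is kept
masked = mask_map_by_atom_selection(imol_map, imol_model, "//A", 1)
export_map(masked, "chainA_masked.map")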



----------
From: zhang yu

Herman,

Thanks for clarifying the difference between map coefficients and map files for me. What I really need are only map files, and I could generate those masked maps by following Paul's suggestion.

Yu


real dimer vs crystal packing

From: Qiang Chen
Date: 20 February 2012 21:21


Thanks, Artem!
PDE2A also uses three domains to form a homodimer. However, without the
catalytic domain, the GAF B domain swings out. This is an excellent
example of the restrictions set by the multidomain context.

Does anyone know other examples?
Thanks a lot!

> PDE2 full length versus domain only structures, I think.
>
> Artem
>
> On Thu, Jan 12, 2012 at 10:03 AM, Qiang Chen >wrote:
>
>> Hi,
>>
>> I'm working on a multi-domain protein which uses three domains to form
>> homodimer, and the full-length structure is available. We have solved
>> the
>> structure of one binding domain alone and found its homo-binding mode is
>> totally different from that of the full-length protein.
>>
>> Do you know examples or papers discussing the similar subjects (crystal
>> packing shows false binding mode)?
>>
>> Thanks a lot!
>>
>> Qiang
>>
>>
>>
>>

choice of wavelength

From: Seungil Han
Date: 15 February 2012 22:23


All,
I am curious to hear what our CCP4 community's thoughts are....
I have a marginally diffracting protein crystal (3-3.5 Angstrom resolution) and would like to squeeze out a few tenths of an angstrom.
Given that I am working on crystal quality improvement, would a different wavelength make any difference in resolution, for example 0.9 vs. 1.0 Angstrom at a synchrotron?
Thanks.
Seungil


----------
From: Jacob Keller


I would say the better practice would be to collect higher
multiplicity/completeness, which should have a great impact on maps.
Just watch out for radiation damage though. I think the wavelength
will have no impact whatsoever.

JPK


----------
From: Bosch, Juergen


No impact? Longer wavelength, more absorption, more damage. But between the choices given, no problem.
The spread of spots might be better with 1.0 versus 0.9, but that depends on your cell and also on how big your detector is. Given your current resolution, none of the mentioned issues are deal breakers.

Jürgen

......................
Jürgen Bosch


----------
From: Jacob Keller


Well, but there is more scattering with lower energy as well. The
salient parameter should probably be scattering per damage. I remember
reading some systematic studies a while back in which wavelength
choice ended up being insignificant, but perhaps there is more info
now, or perhaps I am remembering wrong?

Jacob

----------
From: Francis E Reyes


Acta Cryst. (1997). D53, 734-737    [ doi:10.1107/S0907444997007233 ]

The Ultimate Wavelength for Protein Crystallography?

I. Polikarpov, A. Teplyakov and G. Oliva

http://scripts.iucr.org/cgi-bin/paper?gr0657



may give some insights.


To the OP, have you solved the structure? In some cases, seeing the packing at low resolution can give you ideas on how to change the construct to obtain higher diffracting crystals.



F

----------
From: Bart Hazes


Diffracted intensity goes up by the  cube of the wavelength, but so does absorption and I don't know exactly about radiation damage. One interesting point is that on image plate and CCD detectors the signal is also proportional to photon energy, so doubling the wavelength gives 8 times diffraction intensity, but only 4 times the signal on integrating detectors (assuming the full photon energy is captured). So it would be interesting to see how the equation works out on the new counting detectors where the signal does not depend on photon energy. Another point to take into account is that beamlines can have different optimal wavelength ranges. Typically, your beamline guy/gal should be the one to ask. Maybe James Holton will chime in on this.

Bart



----------
From: John R Helliwell


Dear Colleagues,
I think the following paper will be of particular interest for some
aspects of this thread:-

J. Appl. Cryst. (1984). 17, 118-119    [ doi:10.1107/S0021889884011092 ]
Optimum X-ray wavelength for protein crystallography
U. W. Arndt
Abstract: If the diffraction pattern from crystalline proteins is
recorded with shorter wavelengths than is customary the radiation
damage may be reduced and absorption corrections become less
important.

Best wishes,
John
Professor John R Helliwell DSc
--

----------
From: A Leslie


On 15 Feb 2012, at 23:55, Bart Hazes wrote:

Diffracted intensity goes up by the  cube of the wavelength, but so does absorption and I don't know exactly about radiation damage. One interesting point is that on image plate and CCD detectors the signal is also proportional to photon energy, so doubling the wavelength gives 8 times diffraction intensity, but only 4 times the signal on integrating detectors (assuming the full photon energy is captured). So it would be interesting to see how the equation works out on the new counting detectors where the signal does not depend on photon energy.


You make a good point about the variation in efficiency of the detectors, but I don't think your comment about the "new counting detectors" (assuming this refers to hybrid pixel detectors) is correct. The efficiency of the Pilatus detector, for example, falls off significantly at higher energies simply because the photons are not absorbed by the silicon (320 microns thick). The DQE for the Pilatus is quoted as 80% at 12 keV but only 50% at 16 keV, and I think this variation is entirely (or at least mainly) due to the efficiency of absorption by the silicon.

Andrew


----------
From: Bart Hazes


Hi Andrew,

I completely agree, and it is what I meant by "(assuming the full photon energy is captured)". If the fraction of photons counted goes up at longer wavelengths, then the relative benefit of using longer wavelengths is even more pronounced on the Pilatus. So for native data sets, the wavelength sweet spot with a Pilatus detector may be a bit longer than what used to be optimal for a given beamline on a previous-generation detector.

Bart

----------
From: Colin Nave


Bart
>> Diffracted intensity goes up by the  cube of the wavelength, but so
>> does absorption and I don't know exactly about radiation damage.

I think this statement should be
"As an approximation, diffracted intensity (integrated) goes up by the square of the wavelength, but so
does absorbed energy."

See for example
Arndt, U. W. (1984) Optimum X-ray wavelength for protein crystallography. J. Appl. Cryst. 17, 118-119.
Fig. 1. Plot of Ie (not Ip). Note that this includes loss of signal due to absorption through the sample. Subsequent calculations have included Compton scattering and other factors.

I believe Zachariasen first specifically pointed out the wavelength dependence of the integrated intensity (one has to include a Lorentz factor). For the second factor, the absorption of a photon approximately follows the cube of the wavelength but the absorbed energy itself follows the square of the wavelength.

Regards
 Colin
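
For the original 0.9 A vs 1.0 A question, the lambda-squared dependence makes the trade-off explicit (a back-of-envelope sketch):

# Both integrated diffracted intensity and absorbed energy scale
# roughly as wavelength^2, so the two effects largely cancel
ratio = (1.0 / 0.9) ** 2
print(round(ratio, 2))  # ~1.23: ~23% more signal, ~23% more absorbed energy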

----------
From: James Holton


The short answer is: no.  Wavelength does not matter.  Not for native data anyway.

I wrote a paper about this recently.  It is open access: http://dx.doi.org/10.1107/S0907444910007262

In particular, check out Figure 2.  The two solid lines are pretty darn flat, and that means the wavelength dependence of damage and scattering power almost exactly cancel.  More on the dotted lines in a bit...

It is easy to screw up the "scattering per damage" calculation as there are many mathematical pitfalls.  Perhaps the trickiest one is thinking that the longer detector distances that would be used at shorter wavelengths (keeping the resolution on the edge of the detector fixed) lead to a net reduction of background/spot.  However, if you carefully calculate the area occupied by a spot, you again find that the noise due to background balances out, and there is again no wavelength dependence.  Lots of people have made that mistake.  Including me!  But eventually I found the error.  Some assurance can be had that the "no wavelength dependence" conclusion is correct because experimental studies (http://dx.doi.org/10.1107/S0907444993013101) also found no significant wavelength dependence of "signal/noise/dose", as expected.

This is not to say that moving the detector back at constant wavelength is not a good idea.  It is!  You will generally get a signal/noise increase proportional to the distance (for weak spots).  And yes, this is why we spend so much money on large-area detectors!

Of course, the wavelength dependence of detector sensitivity is a completely different story.  For most theoretical calculations you assume a perfect detector system where the only noise is photon counting (also called shot noise).  It is important to remember that no such detectors actually exist.  Even Pilatus has some calibration error, pile-up error, etc. as well as a finite "capture fraction".  In fact, pretty much any modern detector is designed to capture only 80-90% of the incident photons at most wavelengths.  I could go on and on, but since the OP was only asking about "1.0 A vs 0.9 A", the change in detector performance over such a narrow range will be negligible when compared to things like crystal-to-crystal variation.  Did you know that a 110 micron crystal is twice the volume of a 90 micron crystal?  And therefore can absorb twice as much energy before enduring the same dose?
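
(A quick check of that last point: (110/90)^3 ≈ 1.83, i.e. roughly twice the volume.)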

The only other "wavelength dependence" that could be of practical importance is the escape of photoelectrons from the illuminated volume because these can carry away some energy that would otherwise cause damage.  This "build up region" effect has long been a trick of medical dosimetry using MeV-class photons (Johns & Cunningham, 1974).  It was only recently demonstrated experimentally on an MX beamline 10.1073/pnas.1017701108.  It may be possible to take advantage of this effect in a "real-world" data collection, but any real gain will require crystal volumes so small that you cannot get a complete dataset from just one.  That is, unless you are working with VERY small molecules, you will need to be in the "multi-crystal dataset regime" before you can take advantage of photoelectron escape.

So, for any "regular" native data collection, I'd say: no, wavelength doesn't matter.

-James Holton
MAD Scientist


Crystal Structures as Snapshots

From: Jacob Keller
Date: 10 February 2012 20:25


Dear Crystallographers,

I am looking for references which discuss the validity of the
assertion that multiple crystal structures of the same or similar
proteins can be considered freeze-frame snapshots of actual
conformations assumed in solution. In a way, the assertion seems
almost definitely true to me, but on the other hand, I could imagine
some objections as well. Seems there should be some classic literature
here...

All the best,

Jacob




----------
From: James Stroud


How could they not be snapshots of conformations adopted in solution?

James

----------
From: Nat Echols

On Fri, Feb 10, 2012 at 12:29 PM, James Stroud <xtald00d@gmail.com> wrote:
> How could they not be snapshots of conformations adopted in solution?

Packing billions of copies of an irregularly-shaped protein into a
compact lattice and freezing it to 100K isn't necessarily
representative of "solution", especially when your solution contains
non-physiological amounts of salt and various organics (and possibly
non-physiological pH too).

-Nat

----------
From: Jacob Keller


> How could they not be snapshots of conformations adopted in solution?

Let me clarify--sorry about that. Consider several structures of the
same protein solved under different conditions, or several homologs
solved under similar conditions, or both. Further, let's say some
structural element, perhaps a helix, assumes different mannerisms in
the various structures. Can I reasonably assert that these structures
are snapshots of the protein which would have existed under
physiological conditions, and assemble the structures to a unifying
conception of the helical motion, or must I assume these are artifacts
of bizarre solution conditions, and one has nothing to do with the
other? Or perhaps every case/protein is unique, in which case no
general rule can be followed, amounting approximately to the same
thing?

Jacob

----------
From: Robert Immormino


Hi Jacob,

Lorena Beese has a few systems where snapshots of reaction mechanisms
have been looked at structurally.

Here are two such papers:

Long, SB, Casey, P., Beese, LS (2002) The reaction path of protein
farnesyltransferase at atomic resolution. Nature Oct 10;
419(6907):645-50.
http://www.ncbi.nlm.nih.gov/pubmed?term=The%20reaction%20path%20of%20protein%20farnesyltransferase%20at%20atomic%20resolution

J. R. Kiefer, C. Mao, J. C. Braman and L. S. Beese (1998) "Visualizing
DNA replication in a catalytically active Bacillus DNA polymerase
crystal" Nature 6664:304-7.
http://www.ncbi.nlm.nih.gov/pubmed?term=Visualizing%20DNA%20replication%20in%20a%20catalytically%20active%20Bacillus%20DNA%20polymerase%20crystal

Cheers,
-bob

----------
From: David Schuller


On 02/10/2012 03:25 PM, Jacob Keller wrote:
Dear Crystallographers,

I am looking for references which discuss the validity of the
assertion that multiple crystal structures of the same or similar
proteins can be considered freeze-frame snapshots of actual
conformations assumed in solution. In a way, the assertion seems
almost definitely true to me, but on the other hand, I could imagine
some objections as well. Seems there should be some classic literature
here...

How could that possibly be the case when any structure is an average of all the unit cells of the crystal over the timespan of the diffraction experiment?




----------
From: Roger Rowlett


I believe the most justifiable assumption one can make is that crystal structures are likely to represent the least soluble conformations of a protein under the conditions of crystallization (which might be a broad range of conditions, including physiological). This can be quite vexing if you are studying an allosteric protein and one of the two conformations is typically much less soluble than the other. BTDT. I'm sure others have had the same experience.

Having said that, the solvent content of protein crystals (which is close to that of cellular conditions), the observation of enzymatic activity in many protein crystals, and the *general* concordance of XRD and NMR structures of proteins (when both have been determined) leads one to believe that XRD structures are likely representative of physiologically relevant conformations.

Cheers,


----------
From: Jacob Keller


Interesting to juxtapose these two responses:

James Stroud:
>How could they not be snapshots of conformations adopted in solution?

David Schuller:
> How could that possibly be the case when any structure is an average of all
> the unit cells of the crystal over the timespan of the diffraction
> experiment?

JPK

----------
From: James Stroud


So the implication is that some of these treatments might allow the protein to overcome energetic barriers that are prohibitive in solution--after the protein is already in the solid state and not in solution any more?

Another view is that crystallization is a result of stabilizing conformations that are accessible in solution.

As for physiological relevance: it wasn't mentioned in the original question.

James

----------
From: George


>Packing billions of copies into a compact lattice
Not so compact: there is 40-80% water.
>freezing it to 100K
We have frozen protein solutions in liquid nitrogen many times and
thawed them, and they were still working OK.
> non-physiological amounts of salt and various organics
What are the salt concentration and osmotic pressure in the cell??
>non-physiological pH too
What is non-physiological pH? I am sure that some enzymes do not work
at pH 7. Also, most proteins have been crystallized at pH close to 7,
so I would not call that non-physiological.

George

PS: There are also lots of solution NMR structures supporting the
physiological relevance of crystal structures.

----------
From: James Stroud

The contrast seems to boil down to the semantics of the word "snapshot".

In my definition, the uncertainty of a structure is an intrinsic quality of the structure and is thus included in the meaning of "snapshot". Part of that uncertainty comes from averaging.

James

----------
From: Ethan Merritt


On Friday, February 10, 2012 12:51:03 pm Jacob Keller wrote:
> Interesting to juxtapose these two responses:
>
> James Stroud:
> >How could they not be snapshots of conformations adopted in solution?
>
> David Schuller:
> > How could that possibly be the case when any structure is an average of all
> > the unit cells of the crystal over the timespan of the diffraction
> > experiment?

This pair of perspectives is the starting point for the introductory
rationale I usually present for TLSMD analysis.

The crystal structure is a snapshot, but just like a photographic
snapshot it contains blurry parts where the camera has captured a
superposition of microconformations.  When you photograph an object
in motion, those microconformations correspond to a trajectory purely
in time.  In a crystallographic experiment, the microconformations
correspond to samples from a trajectory in solution.  Separation in
time has been transformed into separation in space (from one unit
cell to another).  A TLSMD model tries to reproduce the observed
blurring by modeling it as samples from a trajectory described by TLS
displacements.
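
For concreteness, the rigid-body (Schomaker-Trueblood) relation that
TLS refinement rests on predicts the anisotropic displacement of an
atom at position r = (x, y, z), taken relative to the TLS origin, as
(a sketch in LaTeX notation; sign and S-matrix conventions vary
between programs):

  U(r) = T + A L A^{T} + A S + S^{T} A^{T}, \quad
  A = \begin{pmatrix} 0 & z & -y \\ -z & 0 & x \\ y & -x & 0 \end{pmatrix}

with T, L, and S the translation, libration, and screw tensors fitted
by the refinement.  TLSMD's extra step is to partition the chain into
segments that each receive their own T, L, and S.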

The issue of averaging over the timespan of the diffraction experiment
is relevant primarily to individual atomic vibrations, not so much to
what we normally mean by "conformations" of overall protein structure.

       Ethan


--
Ethan A Merritt



----------
From: Nat Echols


Just to clarify - I actually think the original assumption that Jacob
posted is generally reasonable.  But it needn't follow that the
conformation we see in crystal structures is always representative of
the solution state; given the extreme range of conditions in which
crystals grow, I would be surprised if there weren't counter-examples.
I'm not familiar enough with the literature on domain swapping (e.g.
diphtheria toxin) to know whether any of those structures are crystal
packing artifacts.

----------
From: Damian Ekiert

Along the lines of Roger's second point, there was a very nice paper a few years back that found very good agreement between the conformational ensemble sampled by ubiquitin in solution (by NMR) and the ensemble of conformations observed in a large number of crystal structures:

Lange OF, Lakomek NA, Farès C, Schröder GF, Walter KF, Becker S, Meiler J, Grubmüller H, Griesinger C, de Groot BL.
Recognition dynamics up to microseconds revealed from an RDC-derived ubiquitin ensemble in solution.
Science. 2008 Jun 13;320(5882):1471-5. PubMed PMID: 18556554.

Best,

Damian Ekiert

----------
From: Jacob Keller

Isn't calcium-calmodulin one of the archetypical examples of the
crystal structure probably not representing the solution structure
(perhaps because the crystallization pH = 4.5)? Look at that linker
helix--how stable can that be in solution? I don't think a single one
of the NMR ca-calmodulin structures/conformers has the central helix
like that.

Jacob

----------
From: Jon Agirre

Hi Nat,

there are a number of viruses in which a domain swap occurs inside the capsid, with the hinge sequence being highly conserved among their respective families. Perhaps I'm missing your point, but I wouldn't attribute that kind of domain swap to any sort of crystal packing artifact.

Jon
--
Dr. Jon Agirre

----------
From: Mark Wilson


Hi Jacob,
For Ca2+-CaM, and flexible proteins in general, the average conformation in solution may differ from the most crystallizable conformation.  However, any crystallized conformation had to be sampled in solution at some point in order to form a crystal, and thus the crystal structure tells us something about the range of conformations accessible to the protein under the crystallization conditions.

In Ca2+-CaM, the presence of MPD is probably more responsible for the continuous central helix than the pH, but early analysis of the thermal factors in that region of the crystal structure predicted flexibility in the center of this helix that was subsequently observed by NMR to be a flexible linker region.

More generally, I'd argue that crystal disorder is a subset of solution motion: i.e. disorder observed in crystalline protein almost certainly corresponds to motions that occur in solution (perhaps with altered amplitude), but not all solution motions are observed as disorder in the crystal.
Best regards,
Mark


Mark A. Wilson






----------
From: Zhijie Li


Hi,

There is an interesting paper/tool that might shed a little light on
the debate here:

The paper: http://www.ncbi.nlm.nih.gov/pubmed/19956261
The tool:
http://ucxray.berkeley.edu/ringer/Documentation/ringerManual.htm#Utility


As I remember, this tool claims to be able to extract information
about subtle or "hidden" movements of the side chains of an enzyme
from high-resolution crystallographic data.
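
As a rough illustration of the idea, here is a minimal sketch in
Python (not the actual Ringer code; it assumes numpy and a
user-supplied density_at() function that interpolates the map at a
3D position, both of which are my assumptions):

import numpy as np

def rotate_about_axis(p, origin, axis, angle):
    # Rodrigues' rotation of point p about the line through `origin`
    # directed along `axis`.
    axis = axis / np.linalg.norm(axis)
    v = p - origin
    return (origin + v * np.cos(angle)
            + np.cross(axis, v) * np.sin(angle)
            + axis * np.dot(axis, v) * (1.0 - np.cos(angle)))

def chi1_density_profile(cg, ca, cb, density_at, n_steps=72):
    # Spin the modelled gamma atom about the CA-CB bond (the chi1
    # axis) and record the interpolated map value at each candidate
    # position; secondary peaks suggest hidden alternative rotamers.
    angles = np.linspace(0.0, 2.0 * np.pi, n_steps, endpoint=False)
    values = [density_at(rotate_about_axis(cg, cb, cb - ca, a))
              for a in angles]
    return angles, np.array(values)

As I recall, the authors flag secondary peaks above roughly 0.3 sigma
as candidate hidden conformers, but check their manual for the exact
criteria.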

One thing to note: the authors collected the dataset used in the
Nature paper at room temperature. However, their online manual says
they also analyzed 402 high-resolution structures from the PDB (grown
under all kinds of conditions, apparently, and most, if not all,
probably collected under a cryostream) and found an abundance of
alternative side-chain conformations. Are all these alternative
conformations relevant to the proteins' native states under
physiological conditions? I guess it must be case-by-case.

JPK: you might find something you are looking for in the Nature
paper's references. The part mentioning the myoglobin and RNase work
seems promising. Good luck.

Zhijie



----------
From: Joel Sussman


2012_02_11 
Dear All,
Two really striking examples of "Intrinsically Flexible Proteins" are:

(1) Adenylate kinase: Vonrhein, Schlauderer & Schulz (1995) Structure 3, 483 
"Movie of the structural changes during a catalytic cycle of nucleoside monophosphate kinases"
in particular look at:
"video as MPEG white background, closing & opening enzyme (707kb)"
Each "black dot" [upper left, in the morph] indicates an observed crystal structure.

(2) Lac repressor: see the Proteopedia page on lac repressor,
morphing from the structure bound to its cognate DNA to the structure
bound to non-cognate DNA.

best regards,
Joel

----------
From: Poul Nissen


Another good lesson here:

Vestergaard B, Sanyal S, Roessle M, Mora L, Buckingham RH, Kastrup JS, Gajhede M, Svergun DI, Ehrenberg M.
Mol Cell. 2005 Dec 22;20(6):929-38.


----------
From: Nian Huang


Is it possible that the solution structures from SAXS, NMR, and EM miss a conformation present at a very small percentage because its signal is overwhelmed by the majority conformations, while that state of the molecule is trapped and enriched by the crystallization condition?

Nian


Modified and unmodified residue

From: Christopher Wanty
Date: 21 February 2012 07:13


Hello,

I have a structure with a phosphorylated residue, but it looks like the residue may not be completely phosphorylated. I've currently modelled the residue purely as the phosphorylated variant, and have now been trying to put in a second conformer for the unmodified residue, but I'm finding the programs don't like two different residues with the same number.

What is the best way to tackle this? Should I go back to normal residue designation and then make a link to the ligand? Does anyone have a link to help me do this?

Thanks,
Chris



----------
From: Herman Schreuder

Dear Chris,
 
Did you specify your residues in the PDB file as alternative conformations, e.g. name your conformers ASER and BSEP? For Buster and ligands this works for me, and I would expect it to work for other programs as well.
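
To be concrete, the records might look something like this sketch
(serial numbers, coordinates, occupancies, and B-factors are made-up
placeholders; the altLoc flag sits in column 17, directly before the
residue name, and whether SEP is written as ATOM or HETATM is
program-dependent):

ATOM    812  N  ASER A  58      10.123   4.567   8.910  0.60 15.00           N
ATOM    813  CA ASER A  58      11.262   5.401   9.231  0.60 15.40           C
HETATM  819  N  BSEP A  58      10.101   4.590   8.887  0.40 16.10           N
HETATM  820  CA BSEP A  58      11.240   5.430   9.205  0.40 16.50           C
HETATM  821  P  BSEP A  58      13.870   7.120  10.050  0.40 22.30           P

Both conformers keep the same chain ID and residue number, and the
occupancies of the A and B groups should sum to 1.0.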
 
Best,
Herman