From: Gerard Bricogne <gb10@globalphasing.com>
Date: 27 October 2011 11:14
Dear all,
In response to my message below, posted yesterday afternoon (GMT+1), I
received an off-list message from John Helliwell asking me to clarify what I
had meant by "already stored information" when referring to the outline of a
feasibility study envisaged by the IUCr Forum. I answered him, with Cc to
Tom Terwilliger and Brian McMahon. Tom then wrote to me to suggest that I
post my reply to this BB, and John agreed to it - so I include it below.
I hope it will serve to illustrate the possibility of undertaking
something concrete right away. It was written hurriedly, and is not meant to
be a bullet-proof proposal.
With best wishes,
Gerard.
===========================================================================
Dear John,
Thank you for this message off-list.
What I was referring to is the combination of two ideas that emerged
during our discussions in Madrid as the basis for a pilot project - not yet
a final solution:
(1) the identification of every dataset collected at a synchrotron and
stored locally could be implemented by assigning it a unique doi - that was
discussed with Alun, the chap from the Australian synchrotron, Brian, John
Westbrook etc. at the meeting;
(2) since the synchrotrons already store all their user data for some
time, and as several of them intend to make them publicly available after a
statutory period, it seemed a rather minor and inexpensive extra step to
promote the small fraction of datasets actually associated with a
publication to on-line storage, still at synchrotron sources, for the
purpose of making them accessible to the public as a sort of extension of
the present structure factor file.
This initial experiment would therefore involve "federating" the local
storage facilities of a few synchrotrons interested in participating in this
pilot project (or feasibility study), with a special treatment for those
datasets, but nothing that would be inordinately onerous in terms of extra
costs. The relevant doi would then appear in the pdb entry and would enable
interested users to access the raw images for their own purposes. The public
disclosure of image sets would also be at the option of the authors of a pdb
entry, not compulsory at this stage.
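To make idea (1) concrete, a minimal sketch of what minting such a doi might look like, assuming the facility registers through the DataCite REST API, is given below; the account, prefix, names and URLs are hypothetical placeholders, not a real service:

```python
# Minimal sketch of how a participating synchrotron might mint a doi
# for a raw dataset through the DataCite REST API. The account, prefix,
# names and URLs are hypothetical placeholders, not a real service.
import requests

DATACITE_API = "https://api.datacite.org/dois"
REPO_ID, REPO_PASSWORD = "XYZ.SYNCH", "secret"      # hypothetical account

def mint_dataset_doi(dataset_id: str, title: str, year: int) -> str:
    """Register a findable doi pointing at the dataset's landing page."""
    payload = {
        "data": {
            "type": "dois",
            "attributes": {
                "prefix": "10.12345",               # hypothetical prefix
                "event": "publish",                 # make the doi findable
                "titles": [{"title": title}],
                "publisher": "Example Synchrotron",
                "publicationYear": year,
                "types": {"resourceTypeGeneral": "Dataset"},
                "creators": [{"name": "Example Beamline Group"}],
                "url": f"https://data.example-synchrotron.org/{dataset_id}",
            },
        }
    }
    r = requests.post(DATACITE_API, json=payload,
                      auth=(REPO_ID, REPO_PASSWORD))
    r.raise_for_status()
    return r.json()["data"]["id"]                   # the newly minted doi
```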
Starting things in this way would enable a number of critical
procedures to be tested and key parameters to be estimated, and in
particular to start storing those datasets before that storage can be passed
on to a more central entity, and before deposition is made compulsory by the
journals' own policies.
I hope this is not too confused a picture. Its main ingredients are:
1. a number of synchrotrons willing to put the extra resources into
assigning those unique doi's to datasets collected there, which they archive
anyway, and into making those that are eventually associated with a
publication accessible online in a reasonably interactive manner; this is
not a description of the void set, as both the Australian synchrotron and
Diamond expressed interest;
2. a fraction of the creators of MX pdb entries willing to have the
images associated with a pdb entry made available to the public in this way;
3. participation by the PDB and the IUCr in the supervision, or even
coordination, of this pilot project, with a view to examining the logistics
involved and coming up with realistic, "evidence-based" estimates of costs
and benefits, as the basis for subsequently making the hard decisions about
gathering the necessary resources to implement the best form of archiving
(central vs. federated), and deciding whether to make the deposition of raw
data compulsory or not.
The expression "evidence-based" is crucial here: without starting to do
something on a small scale with a subset of willing partners, we will
continue to discuss things "in abstracto" for ever, as has been happening in
some of the contributions to this thread. The true costs will only be known
by actually doing something, and this is even more true about the benefits,
i.e. some developers showing that re-exploiting the raw images with new
processing approaches can deliver better results than those originally
deposited. With such evidence, getting the extra support needed would be
much more likely than if it were applied for in the present situation, where
only opinions are available. So the "let's do it" argument seems to me
irresistible.
I hope, of course, that I have neither misinterpreted nor overinterpreted
what was discussed in Madrid. It was a pity that so many of the influential
people with various commission-related titles were only able to stay for the
beginning of that meeting, which was quite formal and lacking in any form of
ground-breaking discussion. I can guarantee that I was *increasingly awake*
as the meeting proceeded, and that the principles I outlined above were
actually discussed, even if some of them might have been only partially
recorded. I do dream about these matters at night, but I don't think that I
hallucinate (yet).
I hope, finally, that this goes some way towards providing the
clarification you wanted.
With best wishes,
Gerard.
--
On Wed, Oct 26, 2011 at 03:10:21PM +0000, John Helliwell wrote:
> Dear Gerard,
> Thank you for your detailed CCP4bb message of today, <snip>
>
> But I have to come off-list re :-
> The IUCr Forum already has an outline of a feasibility study that would cost only a small amount of
> joined-up thinking and book-keeping around already stored information, so let us not use the inaccessibility of federal or EC funding as a scarecrow to justify not even trying what is proposed there.
>
> and especially:-
> already stored information
>
> Where exactly is this existing store to which you refer?
>
>
> The PDB, it seems, has ruled itself out of taking on this role for the MX community, so that leaves us with convincing the Journals. I have started with IUCr Journals and submitted a Business Plan a month ago.
> It has not instantly led to 'let's do it' but I remain hopeful that that doesn't mean 'No'.
>
> I copy in Tom and Brian.
>
> In anticipation of your clarification.
>
> Thank you.
> Yours sincerely,
> John
> Prof John R Helliwell DSc
===========================================================================
(Below, the message John was referring to in his request for clarification)
--
Date: Wed, 26 Oct 2011 15:29:32 +0100
From: Gerard Bricogne
Subject: Re: [ccp4bb] IUCr committees, depositing images
Dear John and colleagues,
There seem to be a set of centrifugal forces at play within this thread
that are distracting us from a sensible path of concrete action by throwing
decoys in every conceivable direction, e.g.
* "Pilatus detectors spew out such a volume of data that we can't
possibly archive it all" - does that mean that because the 5th generation of
Dectris detectors will be able to write one billion images a second and
catch every scattered photon individually, we should not try to archive
more information than is given by the current merged structure factor data?
That seems a complete failure of reasoning to me: there must be a sensible
form of raw data archiving that would stand between those two extremes and
would retain much more information than the current merged data but would
step back from the enormous degree of oversampling of the raw diffraction
pattern that the Pilatus and its successors are capable of.
* "It is all going to cost an awful lot of money, therefore we need a
team of grant writers to raise its hand and volunteer to apply for resources
from one or more funding agencies" - there again there is an avoidance of
the feasible by invocation of the impossible. The IUCr Forum already has an
outline of a feasibility study that would cost only a small amount of
joined-up thinking and book-keeping around already stored information, so
let us not use the inaccessibility of federal or EC funding as a scarecrow
to justify not even trying what is proposed there. And the idea that someone
needs to decide to stake his/her career on this undertaking seems totally
overblown.
Several people have already pointed out that the sets of images that
would need to be archived would be a very small subset of the bulk of
datasets that are being held on the storage systems of synchrotron sources.
What needs to be done, as already described, is to be able to refer to those
few datasets that gave rise to the integrated data against which deposited
structures were refined (or, in some cases, solved by experimental phasing),
to give them special status in terms of making them visible and accessible
on-line at the same time as the pdb entry itself (rather than after the
statutory 2-5 years that would apply to all the rest, probably in a more
off-line form), and to maintain that accessibility "for ever", with a link
from the pdb entry and perhaps from the associated publication. It seems
unlikely that this would involve the mobilisation of such large resources as
to require either a human sacrifice (of the poor person whose life would be
staked on this gamble) or writing a grant application, with the indefinite
postponement of action and the loss of motivation this would imply.
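For interested users, following such a doi from a pdb entry back to the raw images would be a single lookup through the doi.org resolver; a minimal sketch (the doi shown is a made-up placeholder, not a registered dataset):

```python
# Sketch: resolve a dataset doi to its current landing page. The doi
# below is a hypothetical placeholder, not a registered dataset.
import requests

def resolve_doi(doi: str) -> str:
    """Follow the doi.org redirect chain and return the landing URL."""
    r = requests.head(f"https://doi.org/{doi}", allow_redirects=True)
    r.raise_for_status()
    return r.url

print(resolve_doi("10.12345/example-dataset"))
```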
Coming back to the more technical issue of bloated datasets: deciding
on a sensible form of compression for overly verbose sets of thin-sliced,
perhaps low-exposure images is a scientific problem that must be amenable
to rational analysis. Such a compressed form would already retain a large
fraction, if not all, of the extra information on which we would wish
future improved versions of processing programs to cut their teeth for a
long time to come. This
approach would seem preferable to stoking up irrational fears of not being
able to cope with the most exaggerated predictions of the volumes of data to
archive, and thus doing nothing at all.
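As a purely illustrative check of how tractable such images are, even generic lossless compression collapses sparse, low-exposure frames considerably, because most pixels hold counts at or near zero; the frame below is synthetic Poisson noise, not real data:

```python
# Compress a synthetic low-exposure frame losslessly and report the
# ratio. The frame dimensions mimic a Pilatus 6M (2527 x 2463 pixels);
# the counts are simulated Poisson noise, not a real diffraction image.
import zlib
import numpy as np

rng = np.random.default_rng(0)
frame = rng.poisson(lam=0.2, size=(2527, 2463)).astype(np.int32)
raw = frame.tobytes()
packed = zlib.compress(raw, 6)
print(f"raw: {len(raw)/1e6:.1f} MB  compressed: {len(packed)/1e6:.1f} MB  "
      f"ratio: {len(raw)/len(packed):.0f}x")
```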
I very much hope that the "can do" spirit that marked the final
discussions of the DDDWG (Diffraction Data Deposition Working Group) in
Madrid will emerge on top of all the counter-arguments that consist in
moving the goal posts to prove that the initial goal is unreachable.
With best wishes,
Gerard.
--
On Wed, Oct 26, 2011 at 02:18:25PM +0100, John R Helliwell wrote:
> Dear Frank,
> re 'who will write the grant?'.
>
> This is not as easy as it sounds, would that it were!
>
> There are two possible business plans:-
> Option 1. Specifically for MX, the PDB is the first and foremost
> candidate to seek such additional funds for full diffraction data
> deposition for each future PDB deposition entry. This business plan
> possibility is best answered by PDB/EBI (eg Gerard Kleywegt has
> answered this in the negative thus far, at the CCP4 meeting in January 2010).
>
> Option 2. The Journals that host the publications could add the cost to
> the subscriber and/or the author according to their funding model. As
> an example and as a start a draft business plan has been written by
> one of us [JRH] for IUCr Acta Cryst E; this seemed attractive because
> of its simpler 'author pays' financing. This proposed business plan is
> now with IUCr Journals to digest and hopefully refine. Initial
> indications are that Acta Cryst C would be perceived by IUCr Journals
> as a better place to start considering this in detail, as it involves
> fewer crystal structures than Acta E and would thus be more
> manageable. The overall advantage of the responsibility being with
> Journals as we see it is that it encourages such 'archiving of data
> with literature' across all crystallography-related techniques (single
> crystal, SAXS, SANS, electron crystallography etc) and fields
> (Biology, Chemistry, Materials, Condensed Matter Physics etc), i.e. not
> just one technique and field, although obviously biology is dear to
> our hearts here in the CCP4bb.
>
> Yours sincerely,
> John and Tom
> John Helliwell and Tom Terwilliger
>
> On Wed, Oct 26, 2011 at 9:21 AM, Frank von Delft
> wrote:
> > Since when has the cost of any project been limited by the cost of
> > hardware? Someone has to implement this -- and make a career out of it;
> > thunderingly absent from this thread has been the chorus of volunteers who
> > will write the grant.
> > phx
> >
> >
> > On 25/10/2011 21:10, Herbert J. Bernstein wrote:
> >
> > To be fair to those concerned about cost, a more conservative estimate
> > from the NSF RDLM workshop last summer in Princeton is $1,000 to $3,000
> > per terabyte per year for long-term storage, allowing for overhead in
> > moderate-sized institutions such as the PDB. Larger entities, such
> > as Google, are able to do it for much lower annual costs in the range of
> > $100 to $300 per terabyte per year. Indeed, if this becomes a serious
> > effort, one might wish to consider involving the large storage farm
> > businesses such as Google and Amazon. They might be willing to help
> > support science partially in exchange for eyeballs going to their sites.
> >
> > Regards,
> > H. J. Bernstein
> >
> > At 1:56 PM -0600 10/25/11, James Stroud wrote:
> >
> > On Oct 24, 2011, at 3:56 PM, James Holton wrote:
> >
> > The PDB only gets about 8000 depositions per year
> >
> > Just to put this into dollars. If each dataset is about 17 GB in
> > size, then that's about 136 TB of storage that needs to come online
> > every year to store the raw data for every structure. A two-second
> > search reveals that Newegg has a 3 TB Hitachi for $200. So that's
> > about $9,000 / year of storage for the raw data behind PDB deposits.
> >
> > James
> >
> >
>
> --
> Professor John R Helliwell DSc
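The storage figures quoted in the thread above can be checked with a quick back-of-envelope calculation, using only the numbers given there (8000 depositions a year, ~17 GB per dataset, $200 per 3 TB drive, and the NSF RDLM estimate of $1,000-3,000 per terabyte per year for curated storage):

```python
# Back-of-envelope check of the thread's storage figures; all inputs
# are the numbers quoted above, not independent estimates.
depositions_per_year = 8000
gb_per_dataset = 17
tb_per_year = depositions_per_year * gb_per_dataset / 1000    # ~136 TB

drive_tb, drive_cost = 3, 200                  # consumer drive, 2011 prices
bare_drive_cost = tb_per_year / drive_tb * drive_cost         # ~$9,000/yr

curated_low, curated_high = 1000, 3000         # $/TB/year (NSF RDLM)
print(f"{tb_per_year:.0f} TB/year; bare drives ~${bare_drive_cost:,.0f}/year; "
      f"curated ~${tb_per_year * curated_low:,.0f}-"
      f"${tb_per_year * curated_high:,.0f}/year")
```

The gap between the bare-drive and curated figures is Bernstein's point: disks are cheap, but long-term curation is not.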
--
===============================================================
                      Gerard Bricogne
                    Global Phasing Ltd.
                Sheraton House, Castle Park
                  Cambridge CB3 0AX, UK
===============================================================
----------
From: John R Helliwell
Date: 27 October 2011 14:07
Dear Gerard,
Thank you indeed for this clear and already detailed plan. As well as
covering MX SR datasets, ~75% of the total, this plan can readily be
extended to e.g. SR SAXS and SANS.
For the benefit of CCP4bb participants, just to mention that the IUCr
DDD WG is seeking 'can do' solutions across a broad spread; we have
already put two in place. Firstly, the IUCr Commissions are supplying
exemplar datasets and associated metadata, and this is happening; this
will no doubt involve some communities reaching clarity and agreement
re the data and metadata required. Secondly, we are looking with IUCr
Journals into whether archiving of data with the literature might be
feasible, e.g. for chemical crystallography. NB chemical
crystallography is a huge field and the largest fraction of IUCr
Journals by publication numbers. The 'synchrotrons administering doi'
plan largely does not satisfy this major need, as the largest fraction
of these datasets are measured in house. However, university
institutional repositories do host datasets, and I will check with the
University of Manchester's whether there is a size limit, because this
would again avoid moving the data around unduly. The Journals would
then 'only' have to deal with cases of universities where there is no
such institutional repository.
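As an indication of the kind of agreement on 'data and metadata required' that each community would need to reach, a minimal record for an exemplar MX dataset might look as follows; the field names and values are ad hoc illustrations, not an agreed or proposed standard:

```python
# Illustrative minimal metadata record for an exemplar dataset; field
# names and values are hypothetical placeholders, not a proposed schema.
exemplar_metadata = {
    "dataset_doi": "10.12345/example-dataset",  # hypothetical doi
    "technique": "MX",                          # or SAXS, SANS, ...
    "facility": "Example Synchrotron",
    "beamline": "BL-X1",
    "wavelength_angstrom": 0.9795,
    "detector": "PILATUS 6M",
    "oscillation_deg_per_image": 0.1,
    "number_of_images": 1800,
    "associated_publication_doi": None,         # filled in on publication
}
```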
Re your suggestion that there is a 'moving of goalposts' afoot, e.g. due to
the pixel detector: let's be clear, the IUCr DDD WG, and myself in
particular, already know about, and have used, the pixel detector, and
indeed the Working Group is not called the Diffraction Data Images
Working Group! The principle of 'archiving data with literature' is
the key principle, whatever device is used in the measurements. This
principle is an immoveable goalpost.
Greetings,
John