Of Stewardship and Speculation


Introduction

The four papers preceding this essay treated cryptocurrency as a financial and technological question: what it is, why it attracts fraud, what specific schemes circulate within it, and how a person who chooses to participate can do so with reasonable care. Those questions are real ones, and the papers attempted to address them honestly. But they are not, for the Christian, the deepest questions.

The deepest question is not whether one can participate prudently in cryptocurrency, but whether one should — and underneath that, the still deeper question of what money is for in the first place. A person who has mastered every operational discipline in the previous paper but has not asked these prior questions has prepared himself to navigate a particular market without having asked whether the market is one he ought to be navigating at all.

This essay attempts those prior questions. It does so from within a particular tradition — the historic Christian conviction that everything a man possesses is held in trust from the One who gave it, and that the use of those possessions is part of what he will answer for. The essay does not assume the reader shares that conviction, but it does not pretend to neutrality either. The argument that follows makes most sense within the framework from which it is offered.

The conclusion the essay reaches is not “do not participate in cryptocurrency.” It is more nuanced than that, and more demanding. The conclusion is that the question of participation is itself a smaller question than it appears, and that the larger question — what one’s money is for, and to Whom one will give account for its use — is the question that ought to govern not only this particular decision but every related one.

The Foundational Principle

The Scriptures begin not with the assertion that men own their possessions but with the assertion that they do not. “The earth is the Lord’s, and the fulness thereof; the world, and they that dwell therein,” the Psalmist declares, and the principle threads through both Testaments. The wealth a man holds is not, in the deepest sense, his own. It is entrusted to him for a season, for purposes that are not exhausted by his own comfort, and he will give account for what he did with it.

This is the doctrine of stewardship, and it is unfamiliar enough in modern thinking to require slowing down. The man who owns his property may do with it as he pleases, subject only to the law and his own preferences. The man who stewards his property holds it on behalf of another, and his decisions about it are answerable to that other’s purposes rather than to his own. The two postures look similar from outside — both men go about their business, transact, save, spend — but they are fundamentally different on the inside, and they produce different decisions over time.

The parable of the talents, given by the Lord Jesus Christ, is the most extended treatment of the principle in the New Testament. A man going on a journey entrusts his servants with portions of his estate, each according to his ability. The servants who put the entrusted funds to productive use are commended on the master’s return; the servant who buried his portion out of fear is rebuked. The lesson is not that the servants owned the talents — they did not — but that they were responsible for what they did with what was not theirs.

Three implications follow that bear directly on the present question.

First, the question “what should I do with my money?” is not, properly, the question of a Christian. The proper question is “what should I do with the resources entrusted to me?” The grammatical shift is small but the implications are large. The first question is answered by reference to one’s preferences; the second is answered by reference to the entruster’s purposes.

Second, the doctrine of stewardship does not entail asceticism. The master in the parable does not commend the servant for refusing to engage with the funds; he rebukes him for it. The servants who acted, who took risks, who put the resources to work are the ones who please him. A theology of stewardship is not a theology of inaction. It is a theology of action directed by purposes other than one’s own gratification.

Third, the standard by which the use of the entrusted resources is judged is not financial. The master in the parable rewards faithfulness, not return on investment. A servant who acted faithfully and lost would presumably have been treated differently from the one who acted faithlessly and lost; the text does not give us that case, but the entire framing suggests it. The question is not “how much did you gain?” but “what were you doing, and for whom?”

This is the framework within which the question of cryptocurrency, or of any other financial activity, must be situated for the Christian. The question is not whether the activity is permitted, or whether it is prudent in financial terms, but whether it is consistent with the stewardship to which the participant has been called.

Speculation Examined

With this framework in mind, we can ask a more focused question: what should the Christian make of speculation as such?

The first thing to recognize is that speculation is not, in itself, the same as gambling. The two are often conflated, and the conflation does not survive examination. A gambler stakes funds on the outcome of a contest whose result is unconnected to any productive activity; if he wins, he wins because someone else loses. A speculator stakes funds on the future value of a real asset whose productive use, or whose role in the economy, is the source of any value the speculation may capture. The farmer who plants a crop is speculating that the crop will be worth more at harvest than the cost of inputs. The investor who buys shares in a business is speculating that the business will produce real value over time. The Scriptures contain no condemnation of these activities, and the Lord’s parables repeatedly use them as positive examples of prudent action.

The second thing to recognize is that the line between productive speculation and unproductive speculation is not always easy to draw. The farmer’s wager on the crop is clearly productive; the value he hopes for comes from food that will be eaten. The shareholder’s wager on the business is mostly productive; the value comes from goods and services the business produces, though some of it may come from changes in market sentiment rather than from underlying performance. The day trader who buys and sells the same shares within minutes is engaged in something closer to gambling than to investment, because his gains and losses are decoupled from any productive activity and come entirely from the actions of other traders. The speculator who buys a token whose price depends entirely on the willingness of future buyers to pay more for it than current buyers paid is closer still to gambling, though not identical to it.

Cryptocurrency activity falls at various points along this spectrum. A person who buys Bitcoin because he believes it will retain value better than fiat currency over a long period — and who plans to hold it across the cycles — is engaged in something more like investment than speculation in the pejorative sense, though the asset’s volatility makes the activity more uncertain than buying durable productive assets. A person who trades cryptocurrencies actively, attempting to profit from short-term price movements, is engaged in something close to gambling, regardless of the technical sophistication of his analysis. A person who buys a small token shortly after its launch, hoping to sell it to a later buyer at a higher price, is engaged in something nearly indistinguishable from gambling.

The Christian framework does not produce a single verdict on speculative activity. It produces a set of questions: Is the activity productive, in the sense of being connected to real value being created somewhere? Is the participant exposed to risk in a way that is proportionate to his overall resources and obligations? Is the activity occupying a place in his life — in attention, in emotion, in time — that crowds out the things to which he owes greater attention? Would the participant be willing to give a frank account of the activity to his pastor, his spouse, and his children?

A speculation that survives these questions is one a Christian may, with care, engage in. A speculation that does not survive them is one he should not, regardless of how much money it might produce.

The Obligations That Come First

The doctrine of stewardship would be incomplete if it did not specify the purposes for which the entrusted resources are to be used. The Scriptures are not silent on this question, and the obligations they identify are demanding enough that, for most people in most circumstances, they substantially exhaust the resources available.

Family. The Apostle Paul writes that if any provide not for his own, and specially for those of his own house, he hath denied the faith, and is worse than an infidel. The provision in view includes the present needs of one’s spouse and children, but it extends beyond them. It includes the prudent preparation for foreseeable future needs — the children’s education, the spouse’s security in widowhood, one’s own care in old age — and the obligation to leave an inheritance, which the Proverbs describe as the practice of a good man. Resources that are committed to speculation are, by definition, not committed to these provisions, and the trade-off needs to be examined honestly. A man who speculates with funds that should have been securing his family’s future has not been bold; he has been negligent.

Neighbor. The Lord’s summary of the law places love of neighbor on a level with love of God, and the Apostle James warns against the religion that says to a brother in need “depart in peace, be ye warmed and filled” while withholding the means of meeting that need. The obligation of generosity to those in genuine need is not optional in the Christian life. It is, in the Scriptures, one of the chief purposes for which God grants surplus. A man who has accumulated resources beyond what his family requires has been given them, in part, to bless those who have less; resources that are committed to speculation are not, in that moment, available for that blessing.

The Work of the Kingdom. The financial support of the local assembly, of those who labor in the Word, of missionary work, and of the various forms of practical Christian charity is the third great call on the steward’s resources. The Apostle Paul instructs the Corinthians to lay by them in store as God hath prospered them, and the same principle threads through the whole of the New Testament. This obligation, like the others, is not exhausted by the tithe — the Scriptures do not bind the New Testament Christian to a specific percentage — but it is real and substantial, and it has the first claim on surplus after one’s family has been provided for.

The pattern that emerges from these obligations is not a prohibition on speculation but a placement of it. Speculation comes, if at all, after family is provided for, after the neighbor in need has been considered, after the work of the kingdom has been supported. It comes from the genuine surplus that remains after these prior claims have been met. And it comes in proportion to that surplus, not as a means of generating the resources that should already have been provided through diligent work.

This ordering is uncomfortable. It rules out, for most Christians, the use of significant resources for speculation, because most Christians do not have significant surplus beyond their prior obligations. It rules out, for many Christians, the use of any resources at all for speculation, because their obligations exceed what they can comfortably meet from their current income. The honesty of the framework requires acknowledging these implications rather than softening them.

A Christian who finds himself with no surplus available for speculation after his prior obligations have been met has not been unfortunate; he has been faithful. A Christian who has carved out resources for speculation by neglecting those obligations has not been bold; he has been disordered. The framework does not commend speculation as a path to wealth; it permits it, with caution, as one possible use of genuine surplus after the prior calls have been answered.

The Heart’s Susceptibility

The third element of the stewardship framework is the most uncomfortable, because it concerns not external obligations but the internal state of the steward.

The Apostle Paul warns Timothy that they that will be rich fall into temptation and a snare, and into many foolish and hurtful lusts, which drown men in destruction and perdition. The warning is not against being rich, which the Scriptures treat as a circumstance some are called to and others not, but against the desire to be rich — the orientation of the heart toward wealth as a goal. That orientation, Paul says, is destructive in itself, regardless of whether it succeeds in producing wealth.

The application of this warning to speculation is direct. A market that promises rapid gains is, by its nature, designed to engage the desire to be rich. The promise of multiplying one’s resources without corresponding labor speaks to something in the fallen heart that is not safe to feed. A person who enters such a market without examining what it is awakening in him is, in the apostolic warning, walking toward a snare.

This is not an argument that the desire for return on investment is sinful. The diligent are promised the reward of their labor throughout the Scriptures, and the wise investment of resources is commended rather than rebuked. The warning is more specific: the appetite for rapid gain, the orientation of the heart toward wealth as a primary good, the willingness to take risks one would not otherwise take for the sake of increase that one would not otherwise need — these are the marks of the spirit that Paul warns against, and they are precisely the marks that speculative markets are designed to elicit.

The Christian considering participation in cryptocurrency markets should examine himself with particular care on this point. Why does the activity attract him? Is it the technology, the underlying problems the technology addresses, the modest diversification of a long-term portfolio — or is it the prospect of multiplied wealth in a short time? The honest answer matters, because it indicates what the activity is doing in his soul. An activity engaged in for the former reasons may be conducted in the spirit of stewardship; an activity engaged in for the latter is conducted in the spirit Paul warns against, regardless of whether the participant has noticed.

A useful diagnostic question: would the participant be willing to commit to the activity for ten years without checking the price, simply because he believed the underlying thesis was sound? If yes, the engagement is closer to investment. If no — if the activity is meaningful only because of the prospect of frequent price checks and frequent emotional responses to them — the engagement is closer to the form Paul warns against, and prudence suggests withdrawing.

A second diagnostic: would the participant’s spiritual life be visibly better, or visibly worse, if he checked cryptocurrency prices once a year instead of once an hour? Most participants, examining themselves honestly, will find that the high-frequency engagement is doing them no spiritual good, regardless of what it is doing to their net worth. That finding is itself a kind of answer.

A Distinctive Witness in a Distinctive Market

A final consideration, less personal and more outward-facing, concerns the witness that a Christian’s financial behavior bears to those around him.

Cryptocurrency markets, more than most financial environments, are characterized by attitudes that the Scriptures explicitly reject. The covetousness that drives the desire for rapid wealth, the envy that fuels the comparison of one’s gains to others’, the pride that attaches identity to one’s holdings, the anxiety that follows from staking too much on too uncertain a thing — these are not incidental features of the culture but core ones. A Christian who participates in these markets in the same spirit as those around him bears witness to nothing distinctive; he is simply one more participant in a culture that contradicts the Gospel he claims to confess.

A Christian who participates differently bears a different witness. He does not chase the rallies that excite his neighbors. He does not panic at the crashes that demoralize them. His mood does not rise and fall with the price chart. His conversation does not center on his holdings. He gives generously regardless of whether the market is up or down. He keeps his commitments to his family and his local assembly regardless of what his portfolio is doing. He treats cryptocurrency the way he treats any other small allocation of surplus — soberly, with modest expectations, and without the emotional investment that the culture expects.

This kind of participation is rare enough in the field that it is itself a form of testimony. The neighbor or coworker who watches a Christian conduct himself this way is being shown, without any explicit preaching, that a different relationship to money is possible. The witness is not delivered in the participation but in the manner of it, and the manner is shaped not by the market but by the deeper convictions the participant carries into it.

This is not an argument that every Christian should participate in cryptocurrency in order to bear this witness. The witness is borne equally — perhaps more visibly — by the Christian who, having examined the question, declines to participate at all, and whose absence from the market reflects the priorities he has chosen. The witness is borne by the manner of one’s engagement with money, whatever the specific form of that engagement happens to be.

Conclusion

The question this essay set out to address — should a Christian participate in cryptocurrency? — turns out, on examination, to be the wrong question. The right question is: what are the resources entrusted to me for, and how does the use of any portion of them serve those purposes? Cryptocurrency, like every other potential use of money, must answer to that prior question, and its claim on the steward’s resources is no stronger than the answer it can give.

For most Christians, the honest answer is that significant participation in cryptocurrency markets is not what their resources are for. Their resources are for the provision of their families, the care of their neighbors, and the support of the work of the kingdom, and what remains beyond these is rarely large enough to justify substantial speculation. For some, a small allocation of genuine surplus to a long-term position may be defensible, conducted in a manner consistent with stewardship rather than with the prevailing culture of the market. For all, the deeper questions of motive, attention, and witness are more important than the operational ones treated in the previous paper.

The Apostle Paul gave Timothy not only the warning against the love of money but the corresponding charge for those whom God has prospered: that they do good, that they be rich in good works, ready to distribute, willing to communicate, laying up in store for themselves a good foundation against the time to come, that they may lay hold on eternal life. This is the goal toward which the use of money, in any of its forms, is to be ordered. Cryptocurrency, like every other financial activity, is to be evaluated by whether it serves that goal or distracts from it.

The reader who has followed this companion essay alongside the four papers preceding it now has, in his hands, both the practical equipment to navigate the cryptocurrency field if he chooses to and the prior framework within which that choice is properly made. The two are complementary. The operational care without the spiritual framework produces a competent participant in a market whose deeper currents may carry him in directions he would not have chosen. The spiritual framework without the operational care produces a participant whose intentions are right but whose execution leaves him exposed to harms he could have avoided. Both are needed, and both have now been offered.

May the reader who participates do so as a faithful steward, holding his portion lightly and his obligations firmly. May the reader who declines do so with peace, knowing that his resources have been directed to ends he can defend before the Master on the day of account. May all readers, whatever their decisions about this particular market, hold their money in the manner the Scriptures describe: as a tool given for purposes larger than the holder, to be used with diligence and surrendered with gladness when the time for an accounting comes. The earth is the Lord’s, and the fulness thereof. The participant who has not forgotten that has the foundation on which every other decision about money, in this market or any other, rightly rests.


Notes

  1. The framing of stewardship offered in the opening section is not novel; it reflects the historic Christian understanding of property as it appears in writers from Augustine to the Reformers to contemporary expositors. Readers interested in a fuller theological treatment will find the references useful, particularly the works by Blomberg and Schneider, which differ in some details but agree on the foundational principle.
  2. The distinction between speculation and gambling is contested among Christian writers, with some treating any market activity that depends on price changes rather than on underlying productivity as functionally gambling, and others drawing the line more permissively. The position taken in this essay — that the distinction is real but admits of degrees — is, in the author’s reading, the position best supported by the parables of the talents and of the pounds, both of which depict commercial activity involving risk in positive terms.
  3. The treatment of family obligation draws on a long Christian tradition of taking seriously the apostolic injunction in 1 Timothy 5:8. The application of that injunction to long-term provision, and not only to immediate need, is the standard reading among writers in the Puritan and Reformed traditions, though it has support across the broader Christian spectrum. The inheritance principle in Proverbs 13:22 is sometimes treated as merely descriptive of ancient practice; the position taken here is that it is prescriptive of a good father’s intention, while leaving the specific forms of provision to prudence.
  4. The discussion of generosity to the neighbor in need is deliberately brief, because the topic deserves its own essay rather than a paragraph. The position taken here — that surplus carries an obligation to those in genuine need — is the position of essentially the entire Christian tradition, though writers disagree on the mechanisms (personal versus institutional, voluntary versus compelled) by which the obligation is best discharged. The references include several works that treat the question more fully.
  5. The diagnostic questions in the section on the heart’s susceptibility are adapted from a pastoral tradition that uses such questions to surface the underlying state of a soul that the surface behavior may not reveal. They are offered as starting points for self-examination rather than as comprehensive tests. A reader who finds them uncomfortable should treat the discomfort as data.
  6. The discussion of witness is informed by the observation, common in pastoral literature, that a Christian’s behavior in financial matters is among the most visible aspects of his life to those who do not share his confession. Coworkers, neighbors, and family members who never hear him preach a sermon will see him invest, save, spend, and give. What they see shapes their understanding of what Christianity actually produces in those who profess it. This is not a reason to perform a particular financial style for the benefit of observers; it is a reason to take seriously the connection between one’s confession and one’s checkbook.
  7. The closing reference to laying up a good foundation against the time to come (1 Timothy 6:19) is sometimes read as a contradiction of the Lord’s instruction not to lay up treasures on earth (Matthew 6:19). The standard reconciliation, which this essay assumes, is that the two passages refer to different objects: earthly treasure as an end in itself versus earthly resources directed toward eternal ends. The Christian is to do the latter while avoiding the former, and the distinction between them is largely a matter of the heart’s orientation rather than of the visible behavior.
  8. A reader who finds this essay’s perspective unfamiliar may wish to read it alongside one of the more accessible introductions to Christian stewardship listed in the references — Whelchel and Alcorn are good starting points for non-specialist readers, while the works by Blomberg and Wheeler are more substantial treatments for those wanting fuller engagement. The position of this essay is broadly consistent with the mainstream of historic Christian teaching on the subject, though specific applications vary among writers.

References

Alcorn, R. (2003). Money, possessions, and eternity (Rev. ed.). Tyndale House.

Beed, C., & Beed, C. (2006). Alternatives to economics: Christian socio-economic perspectives. University Press of America.

Blomberg, C. L. (1999). Neither poverty nor riches: A biblical theology of possessions. InterVarsity Press.

Blomberg, C. L. (2013). Christians in an age of wealth: A biblical theology of stewardship. Zondervan.

Burkett, L. (1998). Business by the book: The complete guide of biblical principles for the workplace (Rev. ed.). Thomas Nelson.

Chewning, R. C., Eby, J. W., & Roels, S. J. (1990). Business through the eyes of faith. HarperOne.

Foster, R. J. (1985). Money, sex and power: The challenge of the disciplined life. Harper & Row.

Getz, G. A. (2004). Rich in every way: Everything God says about money and possessions. Howard Books.

Gonzalez, J. L. (1990). Faith and wealth: A history of early Christian ideas on the origin, significance, and use of money. Harper & Row.

Hill, A. (2018). Just business: Christian ethics for the marketplace (3rd ed.). InterVarsity Press.

Keller, T. (2010). Counterfeit gods: The empty promises of money, sex, and power, and the only hope that matters. Dutton.

Kraybill, D. B. (2018). The upside-down kingdom (Anniversary ed.). Herald Press.

Pope, S. J. (Ed.). (2010). The hope of liberation in world religions. Baylor University Press.

Ronsvalle, J., & Ronsvalle, S. (2017). The state of church giving through 2015 (27th ed.). Empty Tomb.

Rupprecht, A. A. (1990). Stewardship in the Pauline epistles. Bibliotheca Sacra, 147(587), 322–334.

Schneider, J. R. (2002). The good of affluence: Seeking God in a culture of wealth. Eerdmans.

Sider, R. J. (2015). Rich Christians in an age of hunger: Moving from affluence to generosity (6th ed.). Thomas Nelson.

Stott, J. R. W. (2006). Issues facing Christians today (4th ed.). Zondervan.

Wesley, J. (1986). The use of money. In A. C. Outler (Ed.), The works of John Wesley: Sermons II (Vol. 2, pp. 263–280). Abingdon Press. (Original work published 1760)

Wheeler, S. E. (1995). Wealth as peril and obligation: The New Testament on possessions. Eerdmans.

Whelchel, H. (2012). How then should we work? Rediscovering the biblical doctrine of work. WestBow Press.

Witherington, B., III. (2010). Jesus and money: A guide for times of financial crisis. Brazos Press.

Wright, C. J. H. (2004). Old Testament ethics for the people of God. InterVarsity Press.



Participating Safely In Cryptocurrency If You Choose To


Introduction

The three preceding papers in this series have laid out, in turn, what cryptocurrency is and what it promises, why the field attracts unusual concentrations of fraud, and what specific forms that fraud takes. A reader who has followed the series this far has all the information needed to make an informed decision about whether to participate at all, and has the recognition tools needed to avoid most of the schemes circulating in the field.

This final paper is for the reader who has weighed the promise and the peril and concluded that some measured involvement is right for him. It is a practical handbook. It does not assume that participation is wise, and it begins by inviting the reader to reconsider the question. It then walks through the operational details — position sizing, choosing where to transact, custody, due diligence, security, taxes, recovery posture, and the often-overlooked discipline of knowing when to step away — that distinguish prudent participation from reckless involvement.

A word on tone before beginning. The voice that follows is the voice of a calm older friend who has watched a great many people make a great many mistakes in this field and would like to spare the reader from repeating them. The recommendations are not infallible, and circumstances vary. But the basic shape of prudent participation is reasonably stable across cases, and a reader who follows it will avoid most of the disasters that befall those who do not.

The Threshold Question First

Before any operational question, the prior question: should you participate at all?

The honest answer is that for many people, the right number is zero. There is no moral or financial obligation to hold cryptocurrency. A person who declines to participate is not being left behind by history; he is exercising the same prudence that has served careful savers across centuries. The success stories that circulate in the field are real but unrepresentative, and the losses that do not circulate are far more common than the gains that do.

Several categories of people should be especially slow to enter, and in most cases should not enter at all. A person who is in debt — particularly consumer debt with high interest rates — has a guaranteed return available by paying down that debt, of a magnitude that no speculative investment can reliably match. A person whose income does not yet cover his expenses, or whose emergency reserves are inadequate, has more pressing uses for any spare funds. A person who would be materially harmed by losing the amount under consideration should not put that amount at risk, since loss is among the more likely outcomes in this field.
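The point about debt can be made concrete with a little arithmetic. The sketch below uses hypothetical figures (the balance and interest rate are illustrative, not drawn from the text) to show why paying down high-interest debt is a guaranteed return that speculation cannot reliably match:

```python
# Hypothetical figures: $5,000 of consumer debt at 24% APR.
# Paying the balance off "returns" the interest that would otherwise
# accrue, and that return is certain rather than speculative.

def annual_interest_saved(balance: float, apr: float) -> float:
    """Interest avoided in the first year by retiring the balance."""
    return balance * apr

saved = annual_interest_saved(5_000, 0.24)
print(f"Guaranteed first-year return from payoff: ${saved:,.2f}")
```

A speculative position would need a reliable 24 percent annual gain merely to break even against this, before accounting for the risk of total loss.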

Two categories deserve particular mention. A person who is being pressured by family, friends, or members of his community to “get in” before some perceived opportunity passes is being recruited rather than informed, and should treat the pressure itself as a reason for caution rather than as evidence of the opportunity. A person whose interest in cryptocurrency has been kindled by a specific individual — a romantic interest, an online acquaintance, a confident friend with impressive recent gains — should examine that influence honestly before acting on it.

The principle underlying all of these considerations is the old one: the borrower is servant to the lender, and the man who is hasty to be rich shall not be innocent. Speculation conducted from a position of financial vulnerability is not investment but gambling, and gambling with money one cannot afford to lose is, regardless of the instrument, a form of self-harm.

If the threshold question is answered in the affirmative, the next question is how much.

Position Sizing

The single most important operational decision in cryptocurrency participation is not which assets to buy, or when, or through what platform, but how much of one’s total wealth to expose to the field at all. Most of the people who have been seriously harmed by this market have been harmed not because they made bad selections within their cryptocurrency allocation, but because their cryptocurrency allocation was too large relative to their overall financial picture.

The traditional rule of thumb for highly speculative assets is that one should not commit more than one can afford to lose without changing one’s life. This rule is correct as far as it goes but is often interpreted too generously. A person who tells himself he can “afford to lose” ten percent of his net worth is usually telling himself a story; the actual experience of losing ten percent of one’s net worth is uncomfortable in ways one tends to underestimate in advance.

A more conservative formulation: commit only an amount that, if it went to zero tomorrow and you discovered the loss in the morning, would prompt mild regret rather than meaningful distress. The test is emotional as well as financial, because the emotional response to loss is what produces the panic decisions that turn moderate losses into severe ones. A position sized correctly produces no panic when it declines; a position sized incorrectly produces panic, and the panic produces the worst outcomes.

For most people without specialized expertise or unusual risk tolerance, this works out to a percentage of total investable assets in the low single digits. A figure of one to five percent is reasonable for most participants who want exposure to the asset class. Higher figures are appropriate only for those whose overall financial position is robust enough to absorb a total loss without consequence, or for those whose work in the field gives them an information advantage that ordinary participants do not have.
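The arithmetic of this band is simple enough to state as a short sketch. The Python fragment below is illustrative only; the one-to-five percent range is the guidance given above, and the asset figure is hypothetical.

```python
def position_bounds(investable_assets, low_pct=0.01, high_pct=0.05):
    """Dollar range implied by a low-single-digit speculative allocation.

    The one-to-five percent band reflects the guidance above; both
    percentages are parameters, not prescriptions.
    """
    return investable_assets * low_pct, investable_assets * high_pct

# Hypothetical example: $80,000 of investable assets.
low, high = position_bounds(80_000)
print(f"Suggested exposure: ${low:,.0f} to ${high:,.0f}")
```

On the hypothetical figures above, this prints a range of $800 to $4,000, which is a useful reminder of how modest a prudent position actually is in absolute terms.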

A note on what the position is for. Cryptocurrency, in a prudent portfolio, is a small allocation to an uncorrelated and speculative asset, not a path to wealth. The expectation should be that the position either produces a modest gain over a long period, declines to a small fraction of its value, or goes to zero entirely. Each of these outcomes should be financially survivable. None of them should be the basis for plans about one’s life.

Choosing Where to Transact

If you have decided how much to commit, the next question is where to do business. The choice of platform is one of the most consequential decisions in the field, because a platform’s failure or fraud can result in total loss regardless of how the underlying assets perform.

The criteria for evaluating an exchange or custodian, in rough order of importance:

Regulatory standing. The platform should be licensed and supervised by financial regulators in the jurisdictions where it operates, including the jurisdiction where you reside. The regulatory framework is imperfect, but it provides a baseline of disclosure, reserve requirements, and recourse that unregulated platforms do not. Platforms that advertise their lack of regulatory entanglements are advertising a feature you do not want.

Operating history. A platform that has operated for many years without major incidents has demonstrated, at minimum, the absence of an immediate exit-scam plan. This is a low bar but a real one. New platforms, even well-funded ones with credible-sounding teams, should be approached with caution until they have established a track record.

Reserve attestations or audits. Several major exchanges now publish periodic attestations from independent accounting firms confirming that customer balances are backed by actual assets in the platform’s custody. These attestations are not as strong as full audits, but they are stronger than nothing. A platform that does not publish any such evidence is asking you to take its word on a question for which the relevant evidence is straightforward to provide.

Insurance. Some platforms carry insurance against certain types of loss — typically theft from the platform’s own systems, not loss caused by user error or by user-authorized fraudulent transactions. Insurance coverage is worth understanding in detail, but its presence is generally a sign of greater institutional seriousness.

Jurisdiction. A platform incorporated in a jurisdiction with strong financial supervision and reliable courts is preferable to one incorporated in a jurisdiction known primarily for its accommodation of regulatory arbitrage. The convenience of using an “offshore” platform is rarely worth the loss of recourse when something goes wrong.

A separate principle, often summarized as “not your keys, not your coins,” underlies all of these considerations. Funds held by an exchange on your behalf are functionally a promise by that exchange to deliver the funds when you ask for them. The promise is only as good as the exchange’s solvency and honesty. For any holding you intend to keep for a meaningful period — anything beyond active trading — the prudent practice is to move the funds into custody you control yourself. The next section addresses what that means in practice.

Custody Basics

Self-custody of cryptocurrency means holding the private keys to your funds directly, rather than having a third party hold them on your behalf. It is one of the most distinctive features of the technology and one of its most unforgiving disciplines.

The basic vocabulary:

A wallet is a piece of software or hardware that manages your private keys and allows you to authorize transactions. The wallet does not “contain” your cryptocurrency in any literal sense; the cryptocurrency exists on the network. What the wallet contains is the key that proves you have the right to move it.

A hot wallet is a wallet running on a device that is connected to the internet — typically a phone or computer. Hot wallets are convenient for frequent use but more exposed to attack.

A cold wallet is a wallet whose private keys have never touched an internet-connected device. The most common form is a hardware wallet — a small dedicated device that generates and stores keys offline and signs transactions internally, so that the keys themselves never leave the device.

A seed phrase (sometimes called a recovery phrase) is a sequence of words, typically twelve or twenty-four, from which all of the wallet’s private keys can be derived. Anyone who has the seed phrase has full and permanent control of the wallet and everything in it.
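The word counts are not arbitrary. Under the BIP-39 standard that most wallets follow, each word indexes an eleven-bit value in a fixed list of 2,048 words, and the total bits divide into random entropy plus a short checksum. A small illustrative calculation, in Python for concreteness:

```python
def bip39_layout(word_count):
    # Each BIP-39 word indexes an 11-bit value in a 2,048-word list.
    total_bits = word_count * 11
    # Total bits = entropy + checksum, where the checksum is
    # entropy / 32 bits, so entropy = total_bits * 32 / 33.
    entropy_bits = total_bits * 32 // 33
    checksum_bits = total_bits - entropy_bits
    return entropy_bits, checksum_bits

for words in (12, 24):
    entropy, checksum = bip39_layout(words)
    print(f"{words} words: {entropy} bits of entropy + {checksum}-bit checksum")
```

Twelve words thus encode 128 bits of entropy and twenty-four words 256; the checksum is why a mistyped phrase is usually, though not always, rejected by wallet software.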

The discipline of self-custody comes down to a small set of inviolable practices.

For any holding above a trivial amount, use a hardware wallet from a reputable manufacturer. The hardware-wallet market has consolidated around a handful of established firms; using one of them is far safer than using an obscure alternative, regardless of any features the obscure alternative advertises.

When the hardware wallet is initialized, it will generate a seed phrase and display it to you. Write the seed phrase down on paper, by hand, and store the paper somewhere safe. Do not photograph the seed phrase. Do not type it into any computer. Do not save it in any cloud service, password manager, email draft, or note-taking application. The reason for this absolute prohibition is that any digital copy of the seed phrase is a target for theft, and any theft of the seed phrase is the end of your holdings. The handful of seconds you save by typing the words into a password manager is not worth the lifetime risk it creates.

Consider storing a second copy of the seed phrase in a separate physical location, in case the first is lost to fire or flood. Some users engrave seed phrases on small steel plates for fire resistance; this is a reasonable practice for substantial holdings. Some users split the seed phrase across multiple locations using a formal technique called Shamir’s Secret Sharing; this is for advanced users and adds complexity that introduces its own risks.

Never share your seed phrase with anyone, for any reason. There is no scenario in which a legitimate party needs your seed phrase. Customer support does not need it. Wallet recovery does not require providing it to a third party. Tax authorities do not need it. The only legitimate use of the seed phrase is to restore your own wallet on a new device when the original is lost or damaged, and even then the words are typed into the wallet software, not provided to any service.

A separate discipline: when you receive a new hardware wallet, initialize it yourself. Do not use a device that arrives pre-initialized with a seed phrase, regardless of how plausible the explanation. Several documented cases have involved attackers selling tampered devices through unofficial channels.

A Due-Diligence Framework

For any particular cryptocurrency project that you are considering — whether a major established asset or a newer offering — a basic framework of questions will eliminate most of the unsound ones quickly.

Who built it? The team behind a project should be identifiable, with credentials and history that can be confirmed through independent sources. Anonymous teams are not automatically fraudulent — Bitcoin’s creator remains anonymous to this day — but anonymity raises the burden of evidence elsewhere. For a smaller or newer project, an anonymous team is a substantial warning sign.

Where does the money come from? Every promised return has to be produced somewhere. A project that promises returns must have a credible answer to the question of where those returns are generated. “Trading,” “arbitrage,” “DeFi yields,” and similar one-word answers are not credible answers; they are placeholders that should prompt further questions. The further questions should produce specific descriptions of specific activities that can, in principle, be verified.

Who holds the supply? The distribution of a token’s supply across holders tells you a great deal about the project’s risk profile. If a small number of addresses hold most of the supply, those holders can move the price substantially by selling, and have strong incentives to do so once retail buyers have arrived. Blockchain explorers allow anyone to examine these distributions for any major token; the examination is worth the few minutes it takes.
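The examination requires no special tooling beyond arithmetic on the balances an explorer reports. A minimal sketch, using entirely hypothetical numbers:

```python
def top_holder_share(balances, top_n=10):
    """Fraction of total supply held by the top_n largest addresses.

    `balances` is a list of per-address token balances, e.g. as read
    from a public blockchain explorer. The numbers below are invented
    for illustration.
    """
    total = sum(balances)
    top = sum(sorted(balances, reverse=True)[:top_n])
    return top / total

# Hypothetical distribution: three whales and many small holders.
balances = [400_000, 250_000, 150_000] + [100] * 2_000
share = top_holder_share(balances, top_n=3)
print(f"Top 3 addresses hold {share:.0%} of supply")
```

In this invented distribution the top three addresses hold eighty percent of the supply, which is exactly the kind of concentration that should give a prospective buyer pause.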

What does the token actually do? Many tokens have no economic function beyond being bought and sold. This is not necessarily disqualifying — gold has no economic function beyond being bought and sold either, and has retained value for several thousand years — but it does mean that the token’s price depends entirely on the willingness of future buyers to pay more for it than current buyers paid. Tokens that claim economic functions should be examined carefully to confirm that those functions actually exist and produce real value.

Can the answers be verified independently? A project that requires you to take the founders’ word for its claims is asking for trust that is, in this field, repeatedly abused. A project whose claims can be confirmed through independent sources — court filings, blockchain data, audited financial statements, working products with actual users — has cleared a meaningful bar that most fraudulent projects cannot.

A useful complement to these questions is the test of explanation. If you cannot explain to a thoughtful friend, in clear language, what a project does, how it produces value, and what could go wrong with it, you do not understand it well enough to invest in it. The technical complexity of the field is sometimes used as cover for the absence of a coherent underlying idea; insisting on a clear explanation is one of the most effective defenses available.

Operational Security

Beyond the discipline of seed-phrase storage, prudent participation requires a set of operational practices that protect the accounts and devices through which you interact with the field.

Unique passwords for every account. Use a reputable password manager to generate and store unique, long passwords for every cryptocurrency-related account. Password reuse is the single most common cause of account compromise across the internet, and the consequences are more severe in cryptocurrency than in most other contexts because there is no chargeback mechanism.
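For illustration, the kind of password a reputable manager generates can be approximated with Python's standard secrets module; in practice the manager itself should do both the generating and the storing.

```python
import secrets
import string

def generate_password(length=24):
    """Generate one unique, high-entropy password.

    A password manager does this for you in practice; the point is
    that every account gets its own long random string, never reused.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a different 24-character string every call
```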

Hardware-based two-factor authentication. For any account holding meaningful value, use a physical security key (such as a YubiKey) as the second factor, rather than text messages or authenticator applications. Text-message-based authentication is vulnerable to SIM-swap attacks; authenticator applications are better but still software-based and can be compromised if the underlying device is. Hardware keys are not invulnerable but are far harder to compromise remotely.

Dedicated browser profile or device. Consider conducting cryptocurrency-related activity through a separate browser profile, or even a separate device, that is used for nothing else. This limits the exposure of your cryptocurrency activity to compromises elsewhere on your primary system. For substantial holdings, a dedicated computer that is used only for managing the holdings is reasonable, not paranoid.

Email hygiene. The email address associated with your cryptocurrency accounts is itself a target. Use a dedicated email address for these accounts, not your primary personal address. Secure that email address with the same hardware-based two-factor authentication you use for the cryptocurrency accounts themselves. If the email account is compromised, every account that uses it for password recovery is potentially compromised as well.

Beware of installed software. Browser extensions, wallet plugins, and desktop applications related to cryptocurrency are vectors that have been used in many documented thefts. Install only software from official sources, confirmed through multiple independent channels. Remove any cryptocurrency-related software you are not actively using.

These practices are not optional refinements; they are the baseline. A user who skips them is relying on luck, and luck in this field tends to expire eventually.

Tax and Recordkeeping

Cryptocurrency transactions are taxable events in most jurisdictions, often in ways that surprise participants who have not considered the question.

In the United States, the Internal Revenue Service treats cryptocurrency as property rather than currency for tax purposes. This means that every disposition of cryptocurrency — including selling for dollars, trading one cryptocurrency for another, and using cryptocurrency to purchase goods or services — is a taxable event that may produce a capital gain or loss. The gain or loss is calculated as the difference between the disposition price and the cost basis (what you paid for the asset originally).

Several implications follow that participants frequently miss. Trading one cryptocurrency for another is taxable in the same way as selling for dollars; the absence of any dollars in the transaction does not exempt it. Receiving cryptocurrency as payment for goods, services, or work is ordinary income at the fair market value on the date received. Receiving cryptocurrency through certain network mechanisms — staking rewards, airdrops, hard forks — is generally also taxable income at the time of receipt, although the specific treatment of some of these has been the subject of evolving guidance.

The practical implication is that recordkeeping needs to begin on the first transaction. For each acquisition, record the date, the asset, the quantity, the cost in your home currency, and the source. For each disposition, record the date, the asset, the quantity, the proceeds in your home currency, and the destination. Several commercial services can help reconstruct records from exchange and wallet data, but reconstruction is invariably harder and less reliable than recording the data as you go.
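The gain-or-loss arithmetic can be sketched in a few lines. The example below assumes first-in, first-out (FIFO) lot matching, one common cost-basis method but not necessarily the one that applies to your situation, and the figures are hypothetical.

```python
from collections import deque

def fifo_gain(lots, sell_qty, sell_price):
    """Capital gain on a sale, matching against the earliest lots first.

    `lots` is a deque of (quantity, unit_cost) acquisitions, oldest
    first. FIFO is one common cost-basis method; confirm the correct
    method for your jurisdiction with a professional.
    """
    gain = 0.0
    remaining = sell_qty
    while remaining > 0:
        qty, cost = lots[0]
        used = min(qty, remaining)
        gain += used * (sell_price - cost)
        remaining -= used
        if used == qty:
            lots.popleft()
        else:
            lots[0] = (qty - used, cost)
    return gain

lots = deque([(1.0, 20_000.0), (1.0, 30_000.0)])  # two hypothetical 1-coin buys
print(fifo_gain(lots, 1.5, 40_000.0))
```

Here a sale of 1.5 coins at $40,000 consumes the whole $20,000 lot and half of the $30,000 lot, producing a taxable gain of $25,000, and the half-consumed lot remains for the next disposition.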

A separate consideration: the tax treatment of cryptocurrency continues to evolve, and rules vary substantially across jurisdictions. For any participant with holdings above a trivial amount, consulting a tax professional familiar with cryptocurrency is a sound investment. The cost is modest relative to the cost of mistakes.

Recovery Posture

Despite every precaution, things sometimes go wrong. A device is lost. An account is compromised. A transaction is sent to the wrong address. The question is what to do then, and the candid answer is that recovery is harder in cryptocurrency than in most other financial contexts.

If you suspect that an account has been compromised — unexpected emails about logins, transactions you did not authorize, password reset notifications you did not request — act immediately. Move any remaining funds to a wallet you control, change passwords on all related accounts, and contact the platform’s official support through its official website. Document everything: timestamps, transaction hashes, screenshots, communications.

If funds have been stolen, report the theft to the appropriate authorities. In the United States, this means the Internet Crime Complaint Center operated by the FBI, the Federal Trade Commission, and the platform involved. State financial regulators also accept these reports. Recovery is uncommon, but reports help build the broader patterns that occasionally lead to enforcement actions, and aggregated data informs the warnings that protect future victims.

If you have been the victim of an investment fraud or a relationship-based scheme such as the kind described in the previous paper, additional resources exist. Victim-support organizations can help with the practical and emotional aftermath, and certain civil-recovery firms specialize in tracing stolen funds through blockchain analysis. Approach the latter with caution: a meaningful number of “recovery services” that contact victims of theft are themselves fraudulent, offering to recover the stolen funds for an additional fee that itself disappears. Legitimate recovery work is expensive, slow, and rarely fruitful in absolute terms.

The hardest part of the recovery posture, and the one that prevents the largest secondary losses, is acceptance. Money stolen from a cryptocurrency wallet is, in the great majority of cases, gone permanently. Accepting this quickly prevents the additional losses that come from desperate attempts to recover what cannot be recovered. The Scriptures observe that wealth gotten by vanity shall be diminished, and the corresponding wisdom for wealth lost to fraud is that the dignity of the victim is preserved by mourning the loss honestly rather than by chasing it further into the trap.

Knowing When to Step Away

The final discipline, and the one most often neglected, is recognizing when participation has stopped serving you and walking away when it has.

The warning signs are usually internal rather than external. You find yourself checking prices more often than you intended. The state of your holdings affects your mood in ways that seem disproportionate to the amounts involved. You think about cryptocurrency at times when you would rather be thinking about other things. You make decisions about your participation in ways that you would not endorse if you stopped to examine them.

These are signs that the position has grown larger than it should be, or that the involvement has crossed from financial activity into something closer to compulsion. Both are good reasons to reduce exposure substantially, regardless of what the price chart is doing. A holding that does not interfere with your peace is a tolerable holding; one that does is not, no matter how well it may be performing in any given week.

A related warning sign concerns the people around you. If your involvement in cryptocurrency is creating tension in your marriage, distance from your family, or conflict with people whose judgment you respect, the position itself may not be the problem, but the position is interacting with something that is. Honest conversation with those people, before further action, is far more valuable than any analysis of the market.

The deepest version of this warning sign is spiritual. If the love of money has begun to crowd out other loves, if anxiety about gains and losses has displaced the peace that should belong to a believer, if cryptocurrency activity has begun to compete with prayer, family, work, or worship, the right response is not to fine-tune the position but to step back from it. The Apostle Paul warned Timothy that they that will be rich fall into temptation and a snare, and into many foolish and hurtful lusts, which drown men in destruction and perdition. The warning is general, but it applies with particular force to a market designed to exploit exactly these vulnerabilities.

The man who walks away from a position because it is harming his soul has not lost; he has won the only victory that matters. The man who clings to a position while it harms his soul has not gained, regardless of what the price chart shows.

Conclusion

The four papers in this series have attempted, together, to give an honest account of cryptocurrency: where it came from and what it promises, why it attracts so much fraud, what specific forms that fraud takes, and how a person who chooses to participate can do so with reasonable care.

The picture that emerges is neither the triumphant one painted by the field’s promoters nor the dismissive one favored by its critics. Cryptocurrency is a genuine technological innovation that addresses real problems imperfectly, embedded in a market that has been hospitable to unusual concentrations of fraud, surrounded by a culture that frequently works against the interests of the ordinary participant. Prudent involvement is possible, but it requires discipline, modest expectations, and the willingness to walk away when participation stops being prudent.

The reader who has followed this series to the end has done more diligence than the great majority of people who enter the field. That diligence will not guarantee good outcomes — nothing can — but it shifts the odds significantly. May the reader who chooses to participate do so wisely, and may the reader who chooses not to participate hold his decision with confidence rather than regret. Both choices, made thoughtfully, are honorable.

A final thought, in keeping with the perspective that has shaped this series throughout. Whatever decisions one makes about cryptocurrency, the principles that should govern those decisions are not new. The wisdom of the Proverbs about diligence and the warnings of the Epistles about the love of money were given to people who had never imagined a distributed ledger, but they speak to the situation of someone considering one with full and unchanged force. The instruments change; the human heart does not. A person who attends to the latter while reasoning carefully about the former will find his way through this field, as through any other, with the help of the same Lord who has guided his people through every previous one.


Notes

  1. The recommendation against committing more than a small single-digit percentage of investable assets is conservative relative to some advice in the field but consistent with how speculative and highly volatile assets are generally treated in portfolio theory. Readers whose circumstances differ — for example, those whose total assets are small enough that single-digit percentages would be trivial in absolute terms, or those with specialized expertise — may reasonably arrive at different figures. The principle behind the recommendation, that the position should be financially and emotionally survivable in the event of total loss, is the part that travels.
  2. The injunction against storing seed phrases in cloud services, password managers, or any digital form is sometimes resisted by readers who find the discipline inconvenient. The inconvenience is real, but the alternative is not “slightly elevated risk” but “the most common single point of failure that leads to total loss of holdings.” Several major thefts of substantial value have been traced directly to seed phrases stored in cloud-synced notes applications. The convenience and the catastrophe are linked.
  3. The point about distinguishing legitimate from fraudulent “recovery services” deserves emphasis. Victims of cryptocurrency theft are themselves targets for a secondary class of fraud in which a “recovery service” claims to be able to trace and retrieve stolen funds for a fee. A high percentage of these services are themselves fraudulent. Genuine blockchain-analysis work exists and is performed by reputable firms, but those firms generally work with law-enforcement agencies and large institutions rather than soliciting individual victims through unsolicited contact. An offer of recovery services that arrives through email, social media, or a forum post in the aftermath of a theft should be treated as itself almost certainly fraudulent.
  4. The remarks on tax treatment focus on United States rules because they are the framework most directly familiar to the largest portion of likely readers. Readers in other jurisdictions should not assume that the treatment is identical in their country; many jurisdictions have substantially different rules, some more favorable and some less so. The general point that recordkeeping must begin on the first transaction applies in essentially all jurisdictions.
  5. The closing discussion of knowing when to step away may strike some readers as out of place in a paper otherwise concerned with operational mechanics. It is included because the most consistent finding in the literature on financial harms — across cryptocurrency, gambling, day-trading, and other speculative pursuits — is that the largest losses are concentrated in participants for whom the activity has crossed from financial to compulsive. No operational discipline can substitute for the self-awareness to recognize that crossing when it happens. The signs described in that section are drawn from clinical literature on problem gambling and behavioral addiction, adapted to the cryptocurrency context.
  6. The scriptural references throughout the paper — to the borrower being servant to the lender (Proverbs 22:7), to the hasty being not innocent (Proverbs 28:20), to wealth gotten by vanity (Proverbs 13:11), and to the love of money (1 Timothy 6:9–10) — are not ornamental. The wisdom literature of the Scriptures contains some of the most concentrated practical instruction on money ever written, and its applicability to speculative markets is direct. A reader who finds these references useful may also find profit in a sustained reading of Proverbs alongside any further engagement with the field.
  7. The recommendation to consult a tax professional, made briefly in the paper, is worth amplifying. The tax rules surrounding cryptocurrency are unsettled in several respects, and even professionals must work harder than usual to stay current. A modest professional fee paid annually is among the highest-value expenditures available to a participant with non-trivial holdings, both for the direct return in tax efficiency and for the considerable reduction in anxiety that comes from knowing that one’s records are in order.

References

Antonopoulos, A. M. (2017). Mastering Bitcoin: Programming the open blockchain (2nd ed.). O’Reilly Media.

Antonopoulos, A. M. (2023). Mastering the Lightning Network. O’Reilly Media.

Bartos, J. (2024). Hardware wallet security: A comparative analysis of supply-chain and physical attack vectors. Journal of Cryptocurrency Research and Practice, 9(2), 145–172.

Cong, L. W., Landsman, W., Maydew, E., & Rabetti, D. (2023). Tax-loss harvesting with cryptocurrencies. Journal of Accounting and Economics, 76(2–3), Article 101607. https://doi.org/10.1016/j.jacceco.2023.101607

Conlon, T., Corbet, S., & McGee, R. J. (2020). Are cryptocurrencies a safe haven for equity markets? An international perspective from the COVID-19 pandemic. Research in International Business and Finance, 54, Article 101248. https://doi.org/10.1016/j.ribaf.2020.101248

Delgado-Segura, S., Pérez-Solà, C., Navarro-Arribas, G., & Herrera-Joancomartí, J. (2019). Analysis of the Bitcoin UTXO set. In A. Zohar et al. (Eds.), Financial cryptography and data security (pp. 78–91). Springer. https://doi.org/10.1007/978-3-662-58820-8_6

Eisenbach, T. M., Kovner, A., & Lee, M. J. (2022). Cyber risk and the U.S. financial system: A pre-mortem analysis. Journal of Financial Economics, 145(3), 802–826. https://doi.org/10.1016/j.jfineco.2021.10.007

Federal Trade Commission. (2024). Consumer Sentinel Network data book 2023. Federal Trade Commission. https://www.ftc.gov/reports

Financial Crimes Enforcement Network. (2023). Guidance on virtual currencies. U.S. Department of the Treasury. https://www.fincen.gov/

Internal Revenue Service. (2024). Frequently asked questions on virtual currency transactions. https://www.irs.gov/individuals/international-taxpayers/frequently-asked-questions-on-virtual-currency-transactions

International Organization of Securities Commissions. (2023). Policy recommendations for crypto and digital asset markets: Final report. IOSCO. https://www.iosco.org/

Liebau, D., & Schueffel, P. (2019). Cryptocurrencies and ICOs: Are they scams? An empirical study. Journal of the British Blockchain Association, 2(1), 1–7. https://doi.org/10.31585/jbba-2-1-(5)2019

National Institute of Standards and Technology. (2022). Blockchain technology overview (NISTIR 8202). U.S. Department of Commerce. https://doi.org/10.6028/NIST.IR.8202

North American Securities Administrators Association. (2023). Informed investor advisory: Cryptocurrencies. NASAA. https://www.nasaa.org/

Productivity Commission. (2022). Problem gambling and consumer harms: A review of the evidence. Australian Government. https://www.pc.gov.au/

Schär, F. (2021). Decentralized finance: On blockchain- and smart contract-based financial markets. Federal Reserve Bank of St. Louis Review, 103(2), 153–174. https://doi.org/10.20955/r.103.153-74

Securities and Exchange Commission, Office of Investor Education and Advocacy. (2023). Investor bulletin: Digital asset and “crypto” investment scams. https://www.sec.gov/investor

Sklaroff, J. M. (2017). Smart contracts and the cost of inflexibility. University of Pennsylvania Law Review, 166(1), 263–303.

Williams, R. J., West, B. L., & Simpson, R. I. (2012). Prevention of problem gambling: A comprehensive review of the evidence and identified best practices. Ontario Problem Gambling Research Centre. https://www.gamblingresearch.org/

Yermack, D. (2019). Blockchain technology’s potential in the financial system. In Proceedings of the 2019 Financial Markets Quality Conference. https://doi.org/10.2139/ssrn.3309602



A Field Guide to Crypto Scams


Introduction

The previous paper in this series argued that cryptocurrency attracts unusual concentrations of fraud because of a combination of structural features — irreversibility, pseudonymity, global reach, near-zero token creation costs — and cultural ones — tribal communities, influencer-driven information, narratives of overnight wealth, and the relentless pull of fear of missing out. That argument was diagnostic. The present paper is practical.

Its aim is to equip the reader to recognize the shape of a scam even when the particular story is unfamiliar. Scams in this space mutate constantly in surface detail, but their underlying structures are remarkably stable. A person who understands the categories can usually identify a new variant within a few minutes of encountering it, even when the specific tokens, platforms, and personalities are ones he has never heard of.

The taxonomy that follows is organized by the level at which the fraud operates: at the level of an individual token, at the level of a platform that holds or trades tokens, at the level of direct social manipulation of users, at the level of investment schemes that merely use crypto as a wrapper, at the level of affinity relationships that exploit existing trust, and at the level of impersonation. A final section addresses the red flags that cut across all categories. Throughout, the discussion focuses on patterns rather than on specific incidents, and avoids identifying particular victims in identifying ways.

A note before beginning. The descriptions that follow are meant to inform readers so that they can protect themselves; they are not meant to serve as instructions for anyone tempted to imitate the schemes. Each of the categories described is illegal in most jurisdictions, and the legal consequences for those caught operating them have grown substantially more severe in recent years.

Token-Level Frauds

These are schemes in which the cryptocurrency token itself is the instrument of fraud. The token exists to extract money from buyers, and the mechanism of extraction is built into the token’s design.

Rug Pulls

A rug pull is the simplest and most common form of token-level fraud. The creators of a new token build enthusiasm through marketing, attract buyers who exchange real money for the new token, and then, once a sufficient amount has been gathered, sell their own holdings and disappear. The token’s price collapses to zero or near-zero, and the buyers are left holding worthless assets. The funds, by then, have typically been moved through a series of intermediary addresses and converted into other assets, making recovery nearly impossible.

The distinguishing feature of a rug pull, as opposed to a project that simply fails, is intent. A failed project loses its buyers’ money because the team could not make it work; a rug pull loses it because that was the plan from the start. Distinguishing the two from outside can be difficult, which is part of what makes the scheme effective.

Honeypot Contracts

A honeypot is a more technically sophisticated variant. The token’s underlying code is written so that buyers can purchase the token but cannot sell it. The creators’ wallets, or a small set of insider wallets, retain the ability to sell, but everyone else is trapped. The price chart of a honeypot often looks attractive — steadily rising, with no sellers — because the only people who can sell are the perpetrators, and they wait until the trap is full before doing so.

The mechanism is invisible without inspecting the token’s code, which most retail buyers neither do nor know how to do. Several services now exist that attempt to detect honeypots automatically, but the perpetrators adapt their techniques to evade detection, and the arms race continues.
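The trap can be sketched in ordinary code. The following is a Python stand-in for the contract logic, with hypothetical names; it is not any real token's source, only an illustration of where the asymmetry hides:

```python
# A honeypot's trap, sketched in Python rather than contract code.
# Class and wallet names are illustrative, not any real token's source.

class HoneypotToken:
    def __init__(self, insiders):
        self.insiders = set(insiders)   # wallets permitted to sell
        self.balances = {}

    def buy(self, wallet, amount):
        # Buying always works, so the chart shows a steady stream of buyers.
        self.balances[wallet] = self.balances.get(wallet, 0) + amount

    def sell(self, wallet, amount):
        # The buried check: every non-insider sell is rejected in a way
        # that looks, from outside, like an ordinary failed transaction.
        if wallet not in self.insiders:
            raise PermissionError("transfer failed")
        self.balances[wallet] -= amount

token = HoneypotToken(insiders={"deployer"})
token.buy("buyer", 100)
token.buy("deployer", 100)
token.sell("deployer", 100)   # insiders can exit at will
# token.sell("buyer", 100)    # raises PermissionError: the trap
```

The point of the sketch is that the restriction is a single conditional buried in the transfer path, trivially cheap to write and invisible to anyone who does not read the code.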

Coordinated Pump-and-Dump Groups

In a pump-and-dump, a group of organizers acquires a large position in a low-volume token, coordinates a sudden burst of promotional activity to attract retail buyers, and sells into the resulting price rise. The retail buyers, arriving late, are left holding the token as the price collapses.

These groups operate openly on certain messaging platforms, with members paying for early access to “calls” — announcements of which token will be promoted next. The earliest tier of members buys first, the middle tier second, the lowest tier third, and the public last. By design, only the earliest tier reliably profits; everyone else is, in effect, the exit liquidity for those above them in the structure. Members who lose money rarely complain, both because the loss confirms their own low position in the hierarchy and because they hope to recoup on the next call.

Platform-Level Frauds

These are schemes in which the fraud operates at the level of a platform — an exchange, a lender, a custodian — rather than at the level of any particular token. The platform itself is the instrument.

Fake Exchanges

A fake exchange is a website or application that presents itself as a venue for buying, selling, and storing cryptocurrency, but which in fact has no real trading engine and no real custody. Users deposit funds, see fictional balances and trading activity on the interface, and may even be allowed to withdraw small amounts initially to build trust. Larger withdrawals are blocked under various pretexts — “verification fees,” “tax withholdings,” “anti-money-laundering reviews” — that themselves often require further deposits. When the operators have extracted as much as they can, the website disappears.

Distinguishing a fake exchange from a real one is harder than it should be. Legitimate exchanges and fake ones can look nearly identical from the outside, and the warning signs — implausible returns, unsolicited recruitment, pressure to deposit quickly — overlap with patterns sometimes seen at legitimate platforms during promotional periods.

Long-Running Exit Scams

A more patient variant is the platform that operates legitimately, or nearly so, for an extended period — months or years — and then exits with customer funds. During the operational period, the platform builds a reputation, attracts deposits, processes withdrawals, and earns fees. When the operators decide the time has come, customer access to withdrawals is suspended, communication ceases, and the funds are moved through obfuscating channels.

Several of the most consequential exchange collapses of the past decade have followed this pattern, though in most cases the operators argue afterward that the failure was the result of mismanagement or external shocks rather than fraud. The legal proceedings that follow typically take years to resolve and rarely return more than a small fraction of customer funds.

“Lending” and “Yield” Platforms That Are Actually Ponzi Schemes

A particularly destructive variant of platform-level fraud is the lending or yield platform that promises high returns on deposited cryptocurrency. The promised yield is justified in marketing materials by various claimed activities — lending to institutional traders, providing liquidity to decentralized exchanges, sophisticated arbitrage strategies — that the platform’s operators are said to perform with customer funds.

In a meaningful number of cases over the past several years, the actual source of the promised yield has been the deposits of new customers, paid out to earlier customers as “interest.” This is the classic Ponzi structure, named for the scheme Charles Ponzi ran a century ago, dressed in modern vocabulary. The scheme runs as long as new deposits exceed withdrawals, and collapses as soon as withdrawals catch up. The collapse, when it comes, is sudden and total.
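The cash-flow logic can be made concrete with a toy simulation. The rates, inflows, and horizon below are invented purely for illustration; this models no particular platform, only the arithmetic of paying "interest" out of new deposits:

```python
# Toy cash-flow model of a "yield" platform that pays interest out of
# new deposits. All parameters are invented for illustration.

def months_until_collapse(promised_monthly_rate, deposit_growth,
                          first_deposit=100.0, horizon=120):
    cash = 0.0          # money actually on hand
    owed = 0.0          # customer balances the platform claims to hold
    deposit = first_deposit
    for month in range(1, horizon + 1):
        cash += deposit
        owed += deposit
        payout = owed * promised_monthly_rate    # "interest" paid in cash
        if payout > cash:
            return month                          # payouts can't be met
        cash -= payout
        deposit *= 1 + deposit_growth             # next month's inflow
    return None                                   # survived the horizon

# Growing inflows postpone the reckoning; flat inflows bring it quickly.
print(months_until_collapse(0.07, 0.20))  # → None (still afloat at 10 years)
print(months_until_collapse(0.07, 0.00))  # → 28 (collapses in just over 2 years)
```

The simulation shows why the schemes so often look healthy until the end: as long as inflows grow, the books balance, and the collapse arrives only when recruitment stalls.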

The warning sign for this category is consistent above all others: any return that is substantially higher than what legitimate lending markets pay, and that is presented as low-risk or risk-free, is almost certainly being paid from somewhere other than the activity claimed.

Social Engineering

These are schemes in which the technology and platforms are largely incidental, and the fraud operates through direct manipulation of the user. The target is not the system but the person.

Phishing

Phishing in cryptocurrency follows the familiar pattern from the wider internet, with a few twists specific to the field. The attacker constructs a website, email, or pop-up that imitates a legitimate service — a popular wallet, a major exchange, a token project — and induces the user to enter credentials or, more damagingly, the user’s private key or seed phrase.

A seed phrase is a sequence of words from which all of a wallet’s private keys can be derived. Anyone who obtains the seed phrase has full control of the wallet and all of its funds, permanently. Legitimate services never ask for a user’s seed phrase, for any reason. The single most important defensive habit in this entire field is the absolute refusal to enter a seed phrase anywhere other than the wallet software for which it was originally generated, and even there only when restoring access — never in response to a prompt, a support request, or a “verification” of any kind.
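The reason disclosure is total rather than partial can be shown concretely. The sketch below uses BIP-39-style seed stretching and a simplified stand-in for BIP-32 child-key derivation; the function and the phrase are illustrative, not any real wallet's code:

```python
# Why possession of a seed phrase means possession of everything: every
# key in the wallet is derived deterministically from it. This sketch
# uses BIP-39-style seed stretching plus a simplified HMAC child
# derivation; real wallets follow the full BIP-32 standard.
import hashlib
import hmac

def derive_key(seed_phrase, index):
    # BIP-39 stretches the phrase into a 64-byte seed with PBKDF2.
    seed = hashlib.pbkdf2_hmac("sha512", seed_phrase.encode(), b"mnemonic", 2048)
    # Simplified child-key derivation (stand-in for BIP-32 paths).
    child = hmac.new(seed, index.to_bytes(4, "big"), hashlib.sha512).digest()
    return child[:32].hex()   # stand-in for one private key

phrase = "hypothetical example phrase only"   # never use a real mnemonic
keys = [derive_key(phrase, i) for i in range(3)]
# Anyone holding the phrase regenerates the identical keys, forever.
print(keys[0] == derive_key(phrase, 0))   # → True
```

There is no partial compromise: the phrase is not a password guarding the keys but the material from which every key is computed, which is why no fragment of it can ever be safely shared.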

Fake Customer Support

A related pattern: the user posts a question or complaint on a public forum about a wallet, exchange, or project. Within minutes, a private message arrives from an account identifying itself as customer support, offering to help. The “support agent” walks the user through a process that requires entering the seed phrase, connecting the wallet to a fraudulent site, or transferring funds to an address for “verification.” The funds, of course, are gone.

Legitimate support staff never reach out unsolicited through private messages on public platforms, and never request the information these impersonators request. The pattern is reliable enough to function as a near-perfect filter: an unsolicited private message offering support after a public post is, in this field, almost always fraudulent.

SIM Swap Attacks

A SIM swap is a technique in which the attacker convinces a mobile carrier to transfer the victim’s phone number to a SIM card under the attacker’s control. With the phone number captured, the attacker can intercept text messages used for two-factor authentication, reset account passwords, and gain access to accounts that the user believed were protected.

This attack has been used to compromise both crypto exchange accounts and the personal wallets of public figures known to hold substantial cryptocurrency. The defense is to avoid using text-message-based two-factor authentication for accounts that hold significant value, and to use hardware-based authentication keys or authenticator applications instead. The next paper in this series will treat the operational security measures in more detail.
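The difference between the two defenses can be made concrete. A code from an authenticator application is computed locally from a shared secret and the current time (RFC 6238 TOTP), so nothing of value ever crosses the phone network for a SIM-swapper to intercept. A minimal stdlib-only sketch:

```python
# Why an authenticator app survives a SIM swap: the one-time code is
# computed locally from a shared secret and the clock (RFC 6238 TOTP),
# so nothing useful travels over the phone network.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    key = base64.b32decode(secret_b32)
    counter = int(time.time() if at is None else at) // step
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# RFC 6238 test secret ("12345678901234567890" in base32), at t = 59s:
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59))  # → 287082
```

The carrier is simply not in the loop: capturing the victim's phone number yields nothing, which is precisely the property text-message codes lack.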

Investment Fraud in Crypto Clothing

These are schemes that would be recognizable as ordinary investment fraud in any market, but that use cryptocurrency as the medium because it makes the fraud easier to execute and harder to prosecute.

High-Yield Investment Programs

The high-yield investment program — promising returns of one percent per day, or five percent per week, or some similar figure that no legitimate investment can sustain — is a pattern older than the internet. In cryptocurrency form, it usually involves a polished website, a story about proprietary trading algorithms or arbitrage opportunities, and an interface that displays accumulating returns in real time. Withdrawals work for a time. Then they do not.

The mathematics of these programs are unforgiving. A return of one percent per day compounds to roughly thirty-eight times the original stake in a year, a gain of well over three thousand six hundred percent. No legitimate investment activity in the world generates such returns sustainably. Any program advertising them is either a Ponzi scheme that will collapse or an outright fraud that has no investment activity behind it at all. The two outcomes are equivalent for the buyer.
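The compounding can be checked in a few lines (figures rounded):

```python
# Check the compounding claim: 1% per day, reinvested daily for a year.
principal = 1_000.0
value = principal * 1.01 ** 365

multiple = value / principal        # final value as a multiple of the stake
gain_pct = (multiple - 1) * 100     # gain over the year, in percent

print(f"{multiple:.1f}x the stake")   # → 37.8x the stake
print(f"{gain_pct:,.0f}% gain")       # → 3,678% gain
```

Daily compounding is what makes the figure so extreme: the quoted one percent sounds modest precisely because the arithmetic of 365 consecutive multiplications is not intuitive.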

Fake Managed Accounts

A variant: the buyer is recruited by someone presenting himself as a successful trader who manages money for clients. The buyer is invited to open an account on a platform — sometimes a fake exchange, sometimes a legitimate one — and to grant the “trader” access. The buyer sees impressive gains accumulate in the account over a period of weeks. When the buyer attempts to withdraw, fees and taxes are demanded, each payment is met with a further demand, and the gains turn out to have been entirely fictional.

Signal Groups and Subscription Trading Services

A milder but still predatory variant: services that charge a monthly fee for trading “signals” — recommendations on which tokens to buy and sell. The signals are usually generated by the operators’ own trading positions, so that subscribers’ purchases drive up the prices of tokens the operators already hold and are about to sell. The subscribers pay both the subscription fee and the cost of being on the wrong side of the operators’ trades.

Affinity Fraud

These are schemes that exploit existing relationships of trust — family, friendship, romance, community membership, shared faith — to gain access to victims who would have rejected the same pitch from a stranger.

“Pig Butchering”

The term, awkward but vivid, is the one commonly used in the field for a class of schemes in which the perpetrator builds an emotional relationship with the victim over weeks or months before introducing any financial element. The initial contact may be through a dating application, a social network, an apparent wrong-number text message, or a professional networking site. The perpetrator presents himself or herself as an interesting, successful, attractive person, and invests substantial time in establishing rapport.

Eventually, financial topics arise naturally in the conversation. The perpetrator mentions an investment that has been going well. The victim expresses interest. The perpetrator demurs, then offers to help. The victim makes an initial small investment and is shown impressive returns. Larger investments follow. When the victim attempts to withdraw, complications arise; fees are demanded; the demands escalate; the relationship eventually evaporates along with the funds.

The schemes are operated, in many cases, by organized criminal enterprises with extensive infrastructure. Some involve workers who are themselves victims of human trafficking, compelled to operate the schemes under threat. The cruelty of the form is matched by its effectiveness; reported losses to these schemes have grown into the billions of dollars annually.

Community-Based Schemes

A related pattern involves the exploitation of trust within tight-knit communities. The perpetrator is a member of the community, or convincingly presents himself as one, and uses his standing to recruit fellow members into an investment scheme. The community context lowers the buyer’s natural caution: a person who would scrutinize a stranger’s pitch may accept a brother’s or a fellow congregant’s recommendation with little examination.

These schemes have been documented in immigrant communities, professional associations, religious congregations, and various other settings where strong bonds of mutual trust exist. They have appeared specifically in churches, where the perpetrator’s apparent piety serves the same function that wealth or credentials serve elsewhere — as a marker of trustworthiness that bypasses ordinary scrutiny. The Apostle Paul warned that men would creep into households and lead astray those who were weak; the warning applies with full force to financial deception conducted under the guise of fellowship.

The defense against affinity fraud is simple to state and difficult to practice: the trustworthiness of the messenger is not evidence of the soundness of the investment, and a sound investment can withstand the same scrutiny regardless of who recommends it. A brother who is genuinely offering a good opportunity will not be offended by careful questions; a brother who is offended by careful questions is offering something else.

Imposter Scams

These are schemes that exploit the names, images, and reputations of real people or organizations to lend false credibility to a fraudulent pitch.

Deepfaked Public Figures

Advances in video and audio generation have made it possible to produce convincing fake recordings of public figures appearing to endorse cryptocurrency products. These recordings circulate on social media platforms, often as advertisements, and direct viewers to fraudulent websites. The figures depicted — business executives, financial commentators, and others — have not endorsed the products and in many cases do not know their likeness is being used.

The defense is suspicion of any video advertisement for a financial product that depicts a famous person making claims that would be remarkable if true. Legitimate financial products almost never advertise through unsolicited celebrity endorsements on social media.

Fake Giveaways

A persistent pattern: a social media post or website announces that a major company, prominent individual, or cryptocurrency project is conducting a giveaway, in which participants who send a small amount of cryptocurrency to a specified address will receive a larger amount in return. The promise is sometimes presented as a promotional event, sometimes as a security verification, sometimes as a charitable initiative. The sent funds are never returned.

No legitimate giveaway has ever required participants to send funds first. The pattern is sufficiently universal that the request itself is conclusive evidence of fraud.

Fraudulent “Official” Communications

The final variant: communications that present themselves as official announcements from cryptocurrency projects, exchanges, or wallet providers, directing recipients to take specific actions — visit a particular site, sign a particular transaction, provide particular information. The communications are crafted to look authentic, often using the same logos, formatting, and language as genuine announcements. The actions they request, however, result in compromised accounts or drained wallets.

Legitimate projects communicate through their established official channels and rarely require urgent action from users. The combination of urgency and a request for sensitive action is, in this field, a near-universal indicator of fraud.

Red Flags That Cut Across Categories

A reader who has followed the survey to this point will have noticed that certain warning signs recur across nearly all of the categories. The recurrence is not accidental; it reflects the underlying logic that all of these schemes share. A short list of cross-cutting red flags can serve as a portable summary.

Guaranteed returns, or returns substantially above what legitimate markets pay. No legitimate investment offers guaranteed returns. Any pitch that does is, at best, misrepresenting risk, and is more likely a fraud.

Pressure to act quickly. Legitimate financial opportunities tolerate careful consideration. Fraudulent ones cannot, because careful consideration reliably uncovers them.

Unverifiable team or unverifiable underlying activity. A project whose team members cannot be confirmed to exist, or whose claimed underlying activity cannot be confirmed to take place, is presumptively fraudulent.

Custody held by the project itself. When the entity offering the investment also holds the customer’s funds, with no independent custodian, no audit, and no insurance, the customer is entirely dependent on the entity’s honesty. This dependence has been violated often enough that the structure itself should be viewed with skepticism.

Promises that do not survive the question “what is the source of the yield?” Every payment to an investor has to come from somewhere. If the source cannot be explained in concrete, verifiable terms, the most likely source is the deposits of later investors, which means a Ponzi scheme.

Unsolicited contact. Legitimate investment opportunities do not arrive through dating applications, wrong-number text messages, or private messages from strangers on social media. The unsolicited nature of the initial contact is itself diagnostic.

Requests for sensitive information. Seed phrases, private keys, and account passwords should never be shared with anyone, for any reason, under any pretext. The category of requests for these items contains zero legitimate cases.

Reliance on a single individual’s claims. When the entire case for an investment rests on one person’s representations — a charismatic founder, a trusted community member, a confident trader — and cannot be independently verified, the investment is functionally a bet on that individual’s character. Such bets, in this field, lose with unusual regularity.

Conclusion

The taxonomy presented here covers the great majority of cryptocurrency frauds in circulation, though the surface details continue to evolve and new variants appear regularly. A reader who has internalized the categories and the cross-cutting red flags is in a substantially better position than one who tries to evaluate each new scheme on its own terms. The underlying logic of fraud is older than cryptocurrency, older than the internet, older even than the modern financial system. Recognizing it requires not technical sophistication but ordinary wisdom applied steadily, with the humility to admit that one is not too clever to be deceived.

The Scriptures observe that the simple believes every word, but the prudent man looks well to his going. The categories in this paper are, in effect, a description of where the going requires the most careful looking.

The final paper in this series turns from recognition to participation. For the reader who has weighed the promise and the peril and has concluded that some measured involvement is right for him, what does prudent participation actually look like?


Notes

  1. The vocabulary of cryptocurrency fraud changes more rapidly than the underlying patterns. Readers encountering this paper at some distance from its writing should expect that the terminology has shifted in places, but the structures it describes are likely to remain recognizable. New names, in this field, are usually applied to old schemes.
  2. The figure of one percent per day, used in the discussion of high-yield investment programs, is chosen because it is the threshold at which a return becomes mathematically incompatible with any legitimate underlying activity. Programs offering substantially less than this can still be fraudulent, but the certainty grows with the promised rate.
  3. The phenomenon described as “pig butchering” appears to have originated in criminal enterprises operating from compounds in several countries in Southeast Asia, with workers in many cases trafficked from elsewhere and held in coercive conditions. The schemes have global reach, with victims documented in essentially every wealthy country and many less wealthy ones. The U.S. State Department and several humanitarian organizations have documented the human-trafficking dimensions of these operations; readers interested in pursuing that aspect will find the references useful.
  4. Affinity fraud within churches has a long history that predates cryptocurrency by centuries. The Apostle Paul’s warning in 2 Timothy 3 about men who creep into households is one ancient example; the warnings in 2 Peter 2 about false teachers who through covetousness make merchandise of the faithful are another. The patterns described in this paper are not new in kind, only in vehicle. Congregations would do well to apply the same prudence to financial recommendations from fellow members that they would apply to recommendations from strangers, and pastors would do well to remind their flocks of that prudence rather than to assume it.
  5. The remarks on deepfaked endorsements should not be read as suggesting that all video advertisements involving public figures are fraudulent. They are not. But the rate of fraudulent ones has grown enough that suspicion is the appropriate default, particularly when the advertised product is a financial one and the platform is one on which advertising standards are weakly enforced.
  6. The red flag concerning custody held by the project itself reflects one of the most consistent findings of post-mortem analyses of failed crypto platforms: customer funds were commingled with platform funds, used for purposes the customers did not know about, and could not be recovered when the platform failed. The structural separation of customer assets from platform assets that is taken for granted in regulated financial markets has been the exception rather than the rule in cryptocurrency markets so far.
  7. The recurring observation throughout this paper that legitimate operators do not engage in particular behaviors — never ask for seed phrases, never require funds to be sent before a giveaway, never offer guaranteed returns — is not a guarantee that every legitimate operator avoids every such behavior in every case. It is a statement about base rates: the population of entities engaging in these behaviors consists overwhelmingly of fraudulent ones, and treating the behaviors as decisive indicators is the strategy that produces the best outcomes for the user across the full range of cases.

References

Cross, C., Holt, K., & Powell, A. (2023). Understanding romance fraud: Insights from domestic violence research. British Journal of Criminology, 63(1), 1–17. https://doi.org/10.1093/bjc/azab108

Federal Bureau of Investigation, Internet Crime Complaint Center. (2024). Internet crime report 2023. U.S. Department of Justice. https://www.ic3.gov/AnnualReport/Reports/2023_IC3Report.pdf

Federal Trade Commission. (2024). Consumer Sentinel Network data book 2023. Federal Trade Commission. https://www.ftc.gov/reports

Gandal, N., Hamrick, J. T., Moore, T., & Oberman, T. (2018). Price manipulation in the Bitcoin ecosystem. Journal of Monetary Economics, 95, 86–96. https://doi.org/10.1016/j.jmoneco.2017.12.004

Hamrick, J. T., Rouhi, F., Mukherjee, A., Feder, A., Gandal, N., Moore, T., & Vasek, M. (2021). An examination of the cryptocurrency pump-and-dump ecosystem. Information Processing & Management, 58(4), Article 102506. https://doi.org/10.1016/j.ipm.2021.102506

Li, T., Shin, D., & Wang, B. (2021). Cryptocurrency pump-and-dump schemes. SSRN. https://doi.org/10.2139/ssrn.3267041

Mazza, M. F. (2022). Is crypto-property prone to fraud? Lessons from the collapse of major exchanges. Stanford Journal of Blockchain Law & Policy, 5(2), 88–117.

Mei, Y., Gül, M., & Bütün, İ. (2024). Detecting honeypot smart contracts: A multi-stage classification approach. IEEE Access, 12, 14523–14538. https://doi.org/10.1109/ACCESS.2024.3357821

Moore, T., & Christin, N. (2013). Beware the middleman: Empirical analysis of Bitcoin-exchange risk. In A.-R. Sadeghi (Ed.), Financial cryptography and data security (pp. 25–33). Springer. https://doi.org/10.1007/978-3-642-39884-1_3

North American Securities Administrators Association. (2023). Enforcement report 2023. NASAA. https://www.nasaa.org/industry-resources/enforcement/

Securities and Exchange Commission. (2023). Crypto assets and cyber enforcement actions. https://www.sec.gov/spotlight/cybersecurity-enforcement-actions

United Nations Office on Drugs and Crime. (2023). Casinos, money laundering, underground banking, and transnational organized crime in East and Southeast Asia. UNODC. https://www.unodc.org/roseap/

U.S. Department of Justice. (2023). International virtual currency money laundering enforcement actions. https://www.justice.gov/criminal/cryptocurrency

U.S. Department of State. (2024). Trafficking in persons report 2024. https://www.state.gov/trafficking-in-persons-report/

Vasek, M., & Moore, T. (2015). There’s no free lunch, even using Bitcoin: Tracking the popularity and profits of virtual currency scams. In R. Böhme & T. Okamoto (Eds.), Financial cryptography and data security (pp. 44–61). Springer. https://doi.org/10.1007/978-3-662-47854-7_4

Vasek, M., & Moore, T. (2018). Analyzing the Bitcoin Ponzi scheme ecosystem. In A. Zohar et al. (Eds.), Financial cryptography and data security (pp. 101–112). Springer. https://doi.org/10.1007/978-3-662-58820-8_8

Xia, P., Wang, H., Gao, B., Su, W., Yu, Z., Luo, X., Zhang, C., Xiao, X., & Xu, G. (2020). Trade or trick? Detecting and characterizing scam tokens on Uniswap decentralized exchange. Proceedings of the ACM on Measurement and Analysis of Computing Systems, 5(3), Article 39. https://doi.org/10.1145/3491051

Xu, J., & Livshits, B. (2019). The anatomy of a cryptocurrency pump-and-dump scheme. In Proceedings of the 28th USENIX Security Symposium (pp. 1609–1625). USENIX Association.


Posted in Musings

Why Crypto Is a Magnet for Fraud


Introduction

The first paper in this series argued that cryptocurrency emerged from serious questions about money, trust, and individual autonomy, and that it has partially delivered on some of its founding promises while falling short of others. That paper deliberately set aside one of the most striking features of the field: the extraordinary concentration of fraud within it.

The numbers are arresting. The Federal Trade Commission has reported that consumers in the United States lost more than a billion dollars to crypto-related scams in a single recent year, with median individual losses far higher than for any other payment method tracked by the agency. The FBI’s Internet Crime Complaint Center has documented annual crypto-related losses well into the multiple billions of dollars, and these figures almost certainly understate the true total, since most victims of these schemes do not report the loss to any authority.

A reasonable person encountering these figures may conclude that cryptocurrency is simply a scam by another name, and walk away. But that conclusion, while understandable, is too quick. Fraud has accompanied every major financial innovation in history, from the South Sea Bubble of 1720 to the railroad manias of the nineteenth century to the dot-com era of the late 1990s to the structured-credit boom of the 2000s. The relevant question is not whether crypto has produced fraud — every novel financial frontier has — but why this particular environment has produced so much of it, in such concentrated forms, and what that pattern reveals about the nature of the field.

This paper attempts to answer that question. It does so by examining two categories of factors: the structural features of the technology and markets themselves, and the cultural and cognitive features of the community that has grown up around them. The goal is explanation rather than condemnation. A reader who finishes this paper should understand why fraud thrives here in the way it does, and should be prepared for the next paper in the series, which surveys the specific forms it takes.

Structural Drivers

The structural features of cryptocurrency that make it useful for legitimate purposes are, in many cases, the same features that make it useful for fraudulent ones. This is not a coincidence, and understanding the connection is essential to understanding why the field cannot simply regulate or technologize its way out of the problem.

Irreversibility

The most fundamental structural feature is that cryptocurrency transactions, once confirmed, cannot be undone. In conventional finance, a fraudulent charge on a credit card can be disputed and reversed; a wire transfer made under duress can sometimes be recalled; a check can be stopped. These reversal mechanisms exist because the system has central parties — banks, card networks, clearinghouses — with the authority and the records to undo what has been done.

A cryptocurrency network has no such parties by design. A transaction is final the moment it is included in the ledger, and no court order, no bank manager, and no act of repentance by the perpetrator can recover the funds. This is precisely the property that makes cryptocurrency censorship-resistant, but it is also the property that makes theft of cryptocurrency permanent. A thief who tricks a victim into authorizing a transaction has, in that moment, won. The victim’s only remaining hope is that the thief later moves the funds to an exchange that can identify them and freeze the account — a possibility, but far from a guarantee.

Pseudonymity

Cryptocurrency addresses are not directly tied to real-world identities. A person can create a new address in seconds, without permission or paperwork, and use it to receive funds from anywhere in the world. Sophisticated analysis can often link addresses to individuals through patterns of behavior or interactions with regulated services, but the work is expensive, time-consuming, and far from always successful.

For ordinary users, pseudonymity offers genuine benefits — privacy from commercial surveillance, protection from theft of personal data, the ability to transact without permission from gatekeepers. For fraudsters, the same property dramatically lowers the risk of identification and prosecution. A scammer operating from a jurisdiction with weak law enforcement, behind layers of intermediary addresses, is for practical purposes beyond the reach of victims in another country.

Global, Always-Open Markets

Conventional financial markets operate within national borders, during business hours, under the supervision of specific regulators. Cryptocurrency markets operate globally, around the clock, and across jurisdictions that vary enormously in their regulatory capacity and willingness. An exchange may be incorporated in one country, served by employees in several others, used by customers in dozens, and supervised meaningfully by none.

This globality is part of what makes cryptocurrency useful — a person sending value across borders does not need to navigate the correspondent-banking system or wait for international wires to clear. But the same globality means that a fraudulent platform can serve victims worldwide while remaining outside the reach of any single regulator, and that even when a particular jurisdiction acts, the operators can simply move.

Near-Zero Cost of Token Creation

Creating a new cryptocurrency token, on most modern platforms, costs almost nothing. A modestly skilled programmer can deploy a new token in minutes, with whatever name, supply, and rules he chooses. Legitimate projects use this capability to launch novel products. Fraudulent projects use it to launch tokens that exist solely to extract money from buyers, often with rules embedded in the underlying code that allow the creators to drain the project’s funds at will.
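A toy sketch can make the danger concrete. The class below is not real smart-contract code (real tokens are typically written in a contract language such as Solidity); it is a hypothetical Python illustration of a token ledger whose creator has embedded a rule letting him seize every balance at will.

```python
# Toy illustration only: a token ledger with a hidden predatory rule.
class ToyToken:
    def __init__(self, creator, supply):
        self.creator = creator
        self.balances = {creator: supply}

    def transfer(self, sender, recipient, amount):
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount

    def drain(self, caller):
        # The predatory rule: only the creator may call this, and it
        # moves every holder's balance back to the creator.
        if caller != self.creator:
            raise PermissionError("not the creator")
        total = sum(self.balances.values())
        self.balances = {self.creator: total}

token = ToyToken("creator", 1_000_000)
token.transfer("creator", "buyer", 500_000)  # a buyer acquires tokens
token.drain("creator")                       # the creator takes them all back
print(token.balances)
```

A buyer reading only the project's marketing would never see the `drain` rule; it is visible only to someone who reads, and can read, the underlying code.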

In conventional securities markets, issuing a new instrument requires registration, disclosure, and the involvement of regulated intermediaries. These requirements impose costs, but they also create a paper trail and a set of accountable parties. Cryptocurrency removes the costs and, with them, the trail. The result is that the universe of “investable” crypto tokens has grown to include tens of thousands of items, most of which have no economic substance and many of which were created with predatory intent from the start.

Volatility as Cover

Legitimate cryptocurrency markets are genuinely volatile, with major assets routinely moving twenty percent in a day and minor ones moving far more. This volatility creates ideal cover for fraud. When a fraudulent token collapses to zero, the creators can plausibly claim that the market simply turned against the project. When coordinated manipulation drives unsophisticated buyers in at the top of a pump, the subsequent collapse looks indistinguishable from ordinary market behavior. Volatility makes it harder for victims to recognize that they have been defrauded, and harder for authorities to distinguish fraud from misfortune.

Custodial Risk Concentrated in Lightly Regulated Firms

A great deal of cryptocurrency activity flows through exchanges and custodians — firms that hold customer funds and execute trades on their behalf. In principle, customers could hold their own funds directly, and some do. In practice, most do not, because direct custody is technically demanding and unforgiving of error.

The result is that vast pools of customer money sit in firms whose regulatory supervision ranges from substantial to nonexistent. When such a firm fails — whether through fraud, mismanagement, or simple incompetence — customers typically have no deposit insurance, no government backstop, and weak legal recourse. The failures of major exchanges over the past several years have collectively cost customers tens of billions of dollars, and the pattern has repeated often enough to suggest that the problem is structural rather than incidental.

Cultural and Cognitive Drivers

Structural features explain why fraud is feasible in cryptocurrency. They do not, by themselves, explain why so many people fall for it. For that, we need to look at the culture that has grown up around the technology, and at the cognitive patterns that culture exploits.

Narratives of Overnight Wealth

Cryptocurrency’s most powerful recruiting tool has been the story of early adopters who became wealthy. These stories are true in particular cases — a person who bought Bitcoin at a few dollars and held it for a decade did, in fact, become rich. The stories are also misleading, because they obscure the much larger number of participants who lost money, the role of luck in distinguishing winners from losers, and the difference between the early environment and the present one.

A person who hears the success stories without the context arrives at the market with unrealistic expectations and a willingness to take risks that a more sober assessment would rule out. Fraudsters understand this and design their pitches accordingly. The promise is rarely modest returns from a modest opportunity; it is life-changing wealth, available now, to those who act before the chance is gone.

Tribal Community Dynamics

Cryptocurrency communities, organized around particular tokens or platforms, often develop intense in-group cultures. Members refer to themselves with shared vocabulary, defend the project against critics, and celebrate price increases as collective achievements. The phrases that circulate within these communities — “diamond hands,” “to the moon,” “have fun staying poor” — function partly as encouragement and partly as social pressure against the questions a prudent person would naturally ask.

A newcomer who voices skepticism risks being mocked as ignorant, dismissed as a paid critic, or banned from the community’s online spaces. The result is that the spaces where someone might hope to find honest information are precisely the spaces most engineered to suppress it. Frauds that present themselves as communities — and many do — exploit this dynamic explicitly, building tight in-group bonds before introducing the financial pitch.

Influencer-Driven Information Flow

A great deal of information about specific cryptocurrencies reaches retail buyers through social media influencers — people with large audiences on platforms such as YouTube, X, TikTok, and Instagram who discuss tokens, projects, and trading strategies. Many of these influencers are paid, directly or indirectly, by the projects they discuss, and the disclosure norms that govern such relationships in conventional media are weak or absent here.

The result is that buyers often cannot distinguish independent analysis from paid promotion, and the appearance of broad enthusiasm for a particular token may in fact be the coordinated output of a single marketing budget. Several prominent enforcement actions have established that this is not a hypothetical concern but a regular feature of the industry. For an ordinary buyer trying to make a sound decision, the information environment is hostile in ways that are difficult to perceive from inside it.

Technical Complexity as a Shield

Genuine cryptocurrency systems are technically complex, and the vocabulary surrounding them is dense. This complexity serves legitimate purposes but also provides excellent cover for fraud. A pitch that includes references to “automated market makers,” “liquidity provision,” “yield optimization,” and “cross-chain bridges” can sound sophisticated to a buyer who has neither the time nor the background to evaluate the underlying claims.

Many frauds in the space involve mechanisms that, when examined by someone with the relevant expertise, are quickly recognizable as predatory. But the examination requires the expertise, and most buyers do not have it. The result is an information asymmetry far more severe than in conventional financial markets, where regulators and credentialed intermediaries provide some baseline of professional review.

Fear of Missing Out

Cryptocurrency markets are prone to dramatic, well-publicized price rallies, in which a particular asset’s price multiplies many times over a short period. These rallies, while real, are typically driven by a combination of genuine enthusiasm, speculative momentum, and coordinated promotion, and they are usually followed by equally dramatic declines.

For a person watching from outside, however, the rally is what is visible, and the message it conveys is that something extraordinary is happening that one is missing. This fear of missing out — so common in the field that the abbreviation FOMO is universal — is one of the most reliable tools fraudsters use to overcome the natural caution that would otherwise protect their targets. The pitch is structured to suggest that the opportunity is closing, that the buyer must act now, and that any delay is a failure of nerve rather than an exercise of wisdom.

The Information Gap

Underlying all of the cultural drivers is a basic asymmetry of information. In any market, some participants know more than others. In cryptocurrency markets, the gap is unusually wide and unusually exploitable.

Large holders of a particular token — sometimes called “whales” — know their own positions and intentions, and can move prices substantially with relatively small actions. Insiders at exchanges know which tokens are about to be listed, which generally causes prices to rise. Project founders know the actual state of their projects, including problems that have not been publicly disclosed. Market makers know the structure of order books and can anticipate the behavior of automated trading systems.

Retail buyers, in contrast, generally know none of these things. They make decisions based on price charts, social media sentiment, and the recommendations of influencers whose own positions and incentives are usually opaque. The result is a market in which the least-informed participants regularly transact with the best-informed, and in which the systematic transfer of wealth from the former to the latter is a structural feature rather than an unfortunate accident.

This is the deeper sense in which the field is hospitable to fraud. Even setting aside outright criminal schemes, the ordinary functioning of the market — through entirely legal coordinated promotion, listing announcements, and short-term trading — produces patterns of gain and loss that look uncomfortably similar to fraud. Distinguishing the legal extraction from the illegal becomes more a matter of which jurisdiction one consults than of the substance of what is happening.

A Comparative Note

It would be unfair to leave the impression that cryptocurrency is uniquely corrupt. Every major financial innovation in history has attracted fraud, often on a spectacular scale. The South Sea Bubble of 1720 ruined fortunes and helped produce Britain’s first major securities-fraud statute. The railroad manias of the 1840s and 1870s produced waves of fraudulent stock promotion. The 1920s saw both genuine innovation and the schemes of operators such as Charles Ponzi, whose name has become synonymous with the form. The dot-com era of the late 1990s saw billions of dollars raised for businesses that turned out to be empty. The structured-credit boom of the 2000s ended in a financial crisis whose effects are still being felt.

What distinguishes cryptocurrency is not the existence of fraud but its concentration and the unusual difficulty of countering it through conventional means. The combination of irreversibility, pseudonymity, global reach, near-zero token creation costs, and weak regulatory coverage gives fraud structural advantages that earlier eras did not provide. The cultural features — tribal communities, influencer economies, narratives of overnight wealth — amplify those advantages further.

This does not mean that crypto is doomed to be a fraud-dominated field forever. Earlier frontiers eventually matured, regulators caught up, and the most egregious forms of fraud retreated to the margins. Something similar may happen here in time. But it has not happened yet, and a person considering participation now must reckon with the field as it is rather than as it may someday become.

Conclusion

Cryptocurrency attracts fraud in unusual concentration because its defining features — irreversibility, pseudonymity, global reach, and minimal barriers to creating new assets — are nearly ideal conditions for predatory schemes, and because the culture surrounding it has developed in ways that suppress the questions and protect the predators rather than the prey. This is not a moral indictment of the technology, nor a claim that everyone in the field is dishonest. It is a sober description of a particular environment, offered so that the reader who chooses to enter it does so with eyes open.

The next paper in this series turns from the why to the what: a field guide to the specific forms that crypto fraud takes, organized as a taxonomy so that a reader can recognize the shape of a scheme even when the particular story is new.


Notes

  1. The Federal Trade Commission’s consumer-loss figures are drawn from its annual reports on fraud, which compile complaints filed by consumers and partner agencies. The figures should be read as lower bounds; most victims of consumer fraud do not report the incident, and the underreporting rate for crypto-related fraud is generally believed to be especially high because of shame, technical confusion, and skepticism that anything can be done.
  2. The FBI’s Internet Crime Complaint Center publishes an annual report summarizing complaint data. In recent years, the report has identified cryptocurrency-related fraud as one of the fastest-growing categories of online crime, with so-called “investment fraud” — the category that includes most large-loss crypto schemes — overtaking business-email compromise as the largest single source of reported losses.
  3. The historical examples in the comparative section are treated more fully in the works of Charles Kindleberger, Edward Chancellor, and Carlota Perez, all of whom argue that speculative manias and the frauds that accompany them are a recurring feature of major technological transitions rather than aberrations.
  4. The term “whale,” used in cryptocurrency contexts to refer to a holder of an unusually large position, is borrowed from gambling, where it refers to a high-stakes player whose bets can move the house’s exposure. The borrowed usage retains the implication that the whale’s actions are individually significant rather than just one among many small contributions.
  5. Several enforcement actions over the past decade have established that paid promotion of cryptocurrencies without proper disclosure constitutes a securities-law violation in jurisdictions where the token in question qualifies as a security. The Securities and Exchange Commission has brought such cases against a number of prominent influencers and celebrities. Whether any particular token is a security remains a contested legal question.
  6. The phrase “have fun staying poor,” common in some crypto communities, is directed at skeptics and is intended to suggest that anyone who declines to buy will be left behind by the wealth the speaker expects the asset to produce. Its function within the community is closer to a ritual chant than to a serious argument, but it serves the social purpose of marking dissenters as outsiders.
  7. The asymmetry of information described in the section on the information gap is sometimes referred to in market-microstructure literature as “informed-trader” or “toxic-flow” risk. The technical literature is concerned mainly with the costs to market makers; the present paper is concerned with the costs to retail participants, which are less studied but follow the same logic.

References

Chainalysis. (2024). The 2024 crypto crime report. Chainalysis. https://www.chainalysis.com/reports/

Chancellor, E. (1999). Devil take the hindmost: A history of financial speculation. Farrar, Straus and Giroux.

Cong, L. W., Li, X., Tang, K., & Yang, Y. (2023). Crypto wash trading. Management Science, 69(11), 6427–6454. https://doi.org/10.1287/mnsc.2021.02709

Federal Bureau of Investigation, Internet Crime Complaint Center. (2024). Internet crime report 2023. U.S. Department of Justice. https://www.ic3.gov/AnnualReport/Reports/2023_IC3Report.pdf

Federal Trade Commission. (2024). Consumer Sentinel Network data book 2023. Federal Trade Commission. https://www.ftc.gov/reports

Foley, S., Karlsen, J. R., & Putniņš, T. J. (2019). Sex, drugs, and Bitcoin: How much illegal activity is financed through cryptocurrencies? The Review of Financial Studies, 32(5), 1798–1853. https://doi.org/10.1093/rfs/hhz015

Gandal, N., Hamrick, J. T., Moore, T., & Oberman, T. (2018). Price manipulation in the Bitcoin ecosystem. Journal of Monetary Economics, 95, 86–96. https://doi.org/10.1016/j.jmoneco.2017.12.004

Griffin, J. M., & Shams, A. (2020). Is Bitcoin really un-tethered? The Journal of Finance, 75(4), 1913–1964. https://doi.org/10.1111/jofi.12903

Kindleberger, C. P., & Aliber, R. Z. (2015). Manias, panics, and crashes: A history of financial crises (7th ed.). Palgrave Macmillan.

Li, T., Shin, D., & Wang, B. (2021). Cryptocurrency pump-and-dump schemes. SSRN. https://doi.org/10.2139/ssrn.3267041

Lyandres, E., Palazzo, B., & Rabetti, D. (2022). Initial coin offering (ICO) success and post-ICO performance. Management Science, 68(12), 8658–8679. https://doi.org/10.1287/mnsc.2022.4312

Mackay, C. (1841). Memoirs of extraordinary popular delusions and the madness of crowds. Richard Bentley.

Makarov, I., & Schoar, A. (2022). Cryptocurrencies and decentralized finance (DeFi). Brookings Papers on Economic Activity, 2022(1), 141–215. https://doi.org/10.1353/eca.2022.0017

Perez, C. (2002). Technological revolutions and financial capital: The dynamics of bubbles and golden ages. Edward Elgar.

Securities and Exchange Commission. (2022). SEC charges Kim Kardashian for unlawfully touting crypto security [Press release]. https://www.sec.gov/news/press-release/2022-183

Shiller, R. J. (2015). Irrational exuberance (3rd ed.). Princeton University Press.

Vasek, M., & Moore, T. (2018). Analyzing the Bitcoin Ponzi scheme ecosystem. In A. Zohar et al. (Eds.), Financial cryptography and data security (pp. 101–112). Springer. https://doi.org/10.1007/978-3-662-58820-8_8

Xu, J., & Livshits, B. (2019). The anatomy of a cryptocurrency pump-and-dump scheme. In Proceedings of the 28th USENIX Security Symposium (pp. 1609–1625). USENIX Association.



The Promise of Cryptocurrency

Introduction

Few financial innovations of the past two decades have generated as much heat, and as little light, as cryptocurrency. To its advocates it is the foundation of a freer monetary order; to its critics it is a speculative mania wrapped in technical jargon. Both camps point at the same evidence and reach opposite conclusions, which suggests that the underlying questions are harder than either side admits.

This paper sets aside the cheerleading and the dismissal alike. The goal is to describe, in language a non-specialist can follow, what cryptocurrency is, where it came from, what genuine problems its inventors hoped to solve, and how well it has solved them so far. A reader who finishes this paper should be able to hold a conversation about crypto without either repeating marketing claims or relying on caricature. The three papers that follow will build on this foundation to examine why the ecosystem attracts so much fraud, what specific scams look like, and how a person who chooses to participate can do so with reasonable care.

A word on framing before we begin. It is tempting to ask “is cryptocurrency good or bad?” but that question is too coarse to be useful. A more productive set of questions is: what is the technology actually capable of, what human needs does it address, who has benefited and who has been harmed, and how should a thoughtful person weigh those factors when deciding whether to participate? Those are the questions this series tries to take seriously.

Origins: A Response to a Crisis

The first cryptocurrency, Bitcoin, was launched in January 2009 by a person or group using the pseudonym Satoshi Nakamoto. The timing was not accidental. Just months earlier, in September 2008, the collapse of Lehman Brothers had touched off the worst financial crisis since the 1930s. Banks that had been considered pillars of the global economy required taxpayer-funded rescues. Central banks around the world began aggressive programs of monetary expansion. Millions of ordinary people lost homes, jobs, and savings, while many of the institutions that had caused the crisis remained largely intact.

Nakamoto’s first published description of the system, a nine-page document now called the Bitcoin whitepaper, framed the project as a response to a specific problem: the requirement to trust third parties when transferring money electronically. In the conventional system, a bank stands between any two parties wishing to transact, and that bank can freeze accounts, reverse transfers, charge fees, fail outright, or be compelled by governments to act against its customers. Nakamoto proposed a system in which no such intermediary was necessary — one in which strangers could send value directly to one another with the same finality as handing over cash, but across any distance.

The intellectual roots of the project ran deeper than the 2008 crisis. For roughly two decades before Bitcoin, a loose movement called the cypherpunks had been experimenting with using cryptography to defend individual privacy and autonomy against both governments and corporations. Earlier attempts at digital cash — DigiCash, e-gold, Hashcash, b-money, Bit Gold — had each solved part of the puzzle but never the whole. Nakamoto’s contribution was to combine existing cryptographic techniques in a way that solved the so-called double-spending problem without any central authority keeping the books.

Whatever one thinks of Bitcoin today, it is worth noting that it emerged from a serious intellectual tradition asking serious questions: who should have the power to create money, who should have the power to freeze it, and what does financial privacy mean in a digital age? Those questions did not begin with Bitcoin, and they will not end with it.

The Technical Claims, Stripped of Marketing

Crypto discussions often bog down in vocabulary. A handful of concepts, properly understood, will carry a reader through most of what matters.

A distributed ledger is a record of transactions kept simultaneously by many computers around the world rather than by a single institution. When you wire money through a bank, the bank’s internal database is the authoritative record. When someone sends Bitcoin, the record of that transaction is held in identical copies on tens of thousands of independent machines. There is no master copy, and no single party can quietly edit the past.
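The "no single party can quietly edit the past" property can be sketched in a few lines. The example below is a toy, not a real ledger: each entry records the hash of the entry before it, so altering any earlier entry breaks the chain of hashes that follows.

```python
import hashlib

# Toy hash-chained ledger: each entry includes the hash of its predecessor.
def entry_hash(prev_hash, data):
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

ledger = []
prev = "0" * 64  # conventional all-zero hash for the first entry
for data in ["alice pays bob 5", "bob pays carol 2", "carol pays dan 1"]:
    h = entry_hash(prev, data)
    ledger.append({"data": data, "prev": prev, "hash": h})
    prev = h

def verify(ledger):
    prev = "0" * 64
    for entry in ledger:
        if entry["prev"] != prev or entry_hash(prev, entry["data"]) != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

print(verify(ledger))  # True: the chain is intact
ledger[0]["data"] = "alice pays bob 500"  # tamper with history
print(verify(ledger))  # False: the tampered entry's hash no longer matches
```

Because every honest node runs this kind of check independently, a forger would have to rewrite not one record but every record after it, on a majority of machines, simultaneously.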

Cryptographic signatures allow a person to prove ownership of funds without revealing the secret that grants that ownership. Each user holds a private key — essentially a very large, very secret number — and from it derives a public address that others can send money to. A signature made with the private key can be verified by anyone using only the public address, but the key itself never leaves the owner’s possession (assuming the owner has handled it competently, which, as later papers will discuss, many do not).
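The sign-with-private, verify-with-public idea can be illustrated with textbook RSA and deliberately tiny numbers. This is a classroom toy, not the scheme real cryptocurrencies use (Bitcoin uses elliptic-curve signatures with far larger keys), but the shape of the operation is the same: only the private key can produce the signature, and anyone holding the public values can check it.

```python
# Textbook RSA with the standard classroom parameters (p=61, q=53).
p, q = 61, 53
n = p * q   # public modulus (3233)
e = 17      # public exponent
d = 2753    # private exponent: (e * d) % 3120 == 1, where 3120 = (p-1)*(q-1)

def toy_hash(message, n):
    # Stand-in for a real cryptographic hash, for illustration only.
    return sum(message.encode()) % n

def sign(message, d, n):
    return pow(toy_hash(message, n), d, n)   # requires the private key

def verify(message, signature, e, n):
    return pow(signature, e, n) == toy_hash(message, n)  # public values only

sig = sign("send 5 coins to bob", d, n)
print(verify("send 5 coins to bob", sig, e, n))    # True
print(verify("send 500 coins to bob", sig, e, n))  # False: message altered
```

The altered message fails verification even though the signature itself is unchanged, which is exactly the property a cryptocurrency network relies on when deciding whether a transaction was authorized.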

A consensus mechanism is the procedure by which the network of independent computers agrees on which transactions are valid and in what order. Bitcoin uses “proof of work,” in which computers compete to solve computationally expensive puzzles, and the winner records the next batch of transactions. Other systems use “proof of stake,” in which the right to record transactions is awarded based on how much of the network’s currency a participant has pledged as collateral. Both approaches aim to make it economically irrational for any single party to cheat.
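Proof of work can be made concrete with a small sketch. The toy miner below searches for a nonce that makes a hash begin with a run of zeros; finding the nonce is costly, but anyone can verify the answer with a single hash.

```python
import hashlib

# Toy proof of work: find a nonce so that sha256(prev + txs + nonce)
# starts with `difficulty` hex zeros. Raising the difficulty makes the
# search exponentially harder while verification stays constant-time.
def mine(prev_hash, transactions, difficulty=4):
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(
            f"{prev_hash}{transactions}{nonce}".encode()
        ).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("0" * 64, "alice pays bob 5")
print(digest.startswith("0000"))  # True: the puzzle is solved

# Verification is one hash, no search:
check = hashlib.sha256(f"{'0' * 64}alice pays bob 5{nonce}".encode()).hexdigest()
print(check == digest)  # True
```

The asymmetry between the search and the check is the whole point: honest recording is expensive enough that rewriting history would require redoing all of that work faster than the rest of the network combined.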

Decentralization is the word that does the most work in crypto marketing, and it deserves the closest scrutiny. In theory, a cryptocurrency network is decentralized when no single party can control it, censor it, or shut it down. In practice, decentralization exists on a spectrum and changes over time. A network may be technically decentralized — with thousands of independent nodes — while being effectively centralized in other ways, such as having most of its mining power concentrated in a handful of pools, most of its tokens held by a small number of wallets, or most of its development controlled by a small team. Honest assessment requires asking decentralized along which axis? rather than treating the word as a binary.

These four concepts — distributed ledgers, cryptographic signatures, consensus mechanisms, and decentralization — are sufficient to understand what cryptocurrencies claim to be. They are not sufficient to understand whether any particular cryptocurrency actually delivers on those claims, which is a separate question requiring evidence rather than vocabulary.

Three Things, Often Confused

Perhaps the most useful distinction a newcomer can learn is that “crypto” refers to at least three different things, and that conversations go badly when participants are talking about different ones without realizing it.

The technology is the set of cryptographic and networking techniques that make distributed ledgers possible. These techniques have applications well beyond money — supply-chain tracking, identity verification, document timestamping, and others — and would remain interesting even if every existing cryptocurrency disappeared tomorrow.

The asset class is the collection of tokens (Bitcoin, Ether, and thousands of others) that trade on global markets and that people buy in hopes of appreciation or use for transactions. The asset class is what most retail participants mean when they say they are “in crypto.” Its behavior — extreme volatility, correlation with risk assets in some periods and divergence in others, susceptibility to manipulation — is a financial-markets question, not a technology question.

The industry is the constellation of exchanges, custodians, lenders, marketing firms, influencers, venture funds, conferences, and media outlets that have grown up around the technology and the asset class. The industry is where most of the bad behavior tends to concentrate, because it is the layer at which large amounts of money meet large amounts of human ambition and inadequate regulation.

A person can be enthusiastic about the technology while skeptical of the asset class, or interested in the asset class while distrustful of the industry, or any other combination. Public arguments often founder because one party defends the technology while the other attacks the industry, and neither realizes they are not actually disagreeing.

The Real Problems Crypto Attempts to Address

Setting aside speculation and marketing, several genuine problems motivated the creation of cryptocurrencies and continue to motivate serious work in the field. A fair-minded observer can take these problems seriously without committing to any particular solution.

Censorship resistance. Conventional payment systems can be used as instruments of policy. Banks can freeze accounts at the direction of governments, and payment processors can decline to serve customers they find objectionable. Sometimes this power is used well, against criminals and bad actors. Sometimes it is used poorly, against dissidents, unpopular minorities, or ordinary people caught in bureaucratic errors. A monetary system in which no central party can freeze funds is attractive to anyone who has been on the receiving end of such treatment, and the world contains a great many such people.

Currency debasement. Throughout history, governments under fiscal pressure have been tempted to expand the money supply, and the cumulative effect on savers can be severe. The dramatic monetary expansion that followed the 2008 crisis, and the even more dramatic expansion during the COVID-19 pandemic, made this concern vivid for many people who had previously paid little attention to monetary policy. A monetary asset with a mathematically fixed supply — as Bitcoin claims to be — appeals to those who fear that traditional currencies will lose purchasing power over the long run.

Financial inclusion. Roughly one and a half billion adults worldwide lack access to formal banking services, often because the cost of serving them is uneconomic for traditional institutions or because they live in places where banking infrastructure is weak. A monetary system that requires only a phone and an internet connection could, in principle, reach many of them. The actual record on this front is mixed and worth examining honestly, but the aspiration is serious.

Cross-border friction. Sending money internationally through conventional channels remains expensive, slow, and opaque. Migrant workers sending wages home to their families lose, on average, around six percent of each transfer to fees, and transfers can take days to settle. Crypto-based remittance, where it works, can reduce both the cost and the delay substantially.

Programmable settlement. Beyond simple transfers, some cryptocurrency platforms allow contracts to be encoded directly into the network — so that a payment can be made automatically when a verifiable condition is met, without requiring a lawyer, escrow agent, or court. The implications for fields ranging from insurance to international trade are significant, though the practical realization is still early and the failure modes are still being discovered.

None of these problems is fully solved by current cryptocurrency systems. But each is a real problem, and dismissing the entire field requires either denying that the problems matter or believing that conventional systems will eventually solve them on their own. Neither position is obviously correct.

An Honest Scorecard

After roughly fifteen years of operation, what has cryptocurrency actually delivered?

The clearest success has been the simple fact of survival. Bitcoin has now processed transactions continuously for over a decade and a half, through booms, crashes, regulatory crackdowns, and the failures of many companies built around it. The base protocol has not been successfully attacked. For a system that began as an experiment by an anonymous author, that is a remarkable record.

For very large transfers, especially across borders, cryptocurrency has demonstrated genuine utility. A bank wire of ten million dollars from one continent to another involves substantial fees, multiple intermediaries, and often several days of settlement time. The same transfer in Bitcoin can be completed in under an hour for a fee measured in dollars, not thousands of dollars.

As a store of value, the picture is more mixed. Bitcoin has appreciated dramatically over its history, rewarding patient early holders, but it has also experienced repeated drawdowns of seventy percent or more. Whether something can serve as a store of value while losing most of its purchasing power every few years is a genuinely contested question. Defenders point to the long-term trend; critics point to the volatility along the way. Both are looking at the same data.

The promise of everyday consumer payments has largely not materialized. Most people who hold cryptocurrency do not spend it; they hold it as an investment. Transaction speeds on the major networks remain too slow for point-of-sale use, and the volatility makes pricing in cryptocurrency awkward. Various “layer two” solutions aim to address these issues, with partial success.

Financial inclusion has progressed in specific places where conditions are favorable — notably some remittance corridors and some countries with badly mismanaged national currencies — but the broader promise of banking the unbanked has been hindered by the very volatility and technical complexity that make crypto unsuitable for users with little margin for error.

Decentralization itself, examined closely, has been harder to maintain than early advocates expected. Mining for the largest networks has concentrated in a small number of industrial operations. Ownership of most tokens is heavily skewed toward early holders and large funds. Development of the major protocols is led by relatively small teams. The system is more decentralized than the conventional banking system, but less decentralized than its founding rhetoric suggested.

A fair summary, then, is that cryptocurrency has partially delivered on some of its promises, failed to deliver on others, and produced a great many unintended consequences along the way. That is roughly the historical pattern for major technological innovations, and it is what anyone willing to set enthusiasm aside might reasonably have predicted at the outset.

Conclusion

Cryptocurrency is neither the salvation of money nor an elaborate hoax. It is a serious attempt to solve real problems, using novel technology, in an environment that has attracted both genuine builders and a great many opportunists. Understanding it requires distinguishing the technology from the asset class from the industry, taking the underlying problems seriously without committing to any particular solution, and accepting that the scorecard so far is mixed rather than decisive.

The next paper in this series turns to a question that the present paper has deliberately set aside: why has this particular field attracted such an unusual concentration of fraud, and what does that tell us about how to participate wisely if we choose to participate at all?


Notes

  1. The Bitcoin whitepaper, properly titled Bitcoin: A Peer-to-Peer Electronic Cash System, was first circulated on a cryptography mailing list on October 31, 2008. The genesis block of the Bitcoin network was mined on January 3, 2009, and famously contained a reference to a newspaper headline of that day concerning a second round of bank bailouts — generally read as a comment on the system Bitcoin was meant to provide an alternative to.
  2. The identity of Satoshi Nakamoto remains unknown. Various individuals have been proposed or have claimed the identity, but none has been confirmed by the cryptographic proof — signing a message with one of the original private keys — that would settle the matter. The author’s anonymity is itself a relevant feature of the project’s history.
  3. The term “cypherpunk” was coined in the early 1990s and refers to a loose group of cryptographers, programmers, and political thinkers who advocated the use of strong cryptography as a means of protecting individual privacy. The cypherpunk mailing list, active from 1992 onward, contains many of the early discussions that would eventually influence cryptocurrency design.
  4. The double-spending problem refers to the fact that digital information can be copied trivially, so without some mechanism for ensuring that the same unit of digital currency cannot be spent twice, no purely digital cash system can function. Conventional electronic payment systems solve this by having a trusted central party (a bank) maintain authoritative records. Distributed ledgers solve it without such a party — Nakamoto’s central technical contribution.
  5. Statistics on global financial inclusion are drawn from the World Bank’s Global Findex Database, which is updated periodically and is the most widely cited source on the unbanked population. Figures on remittance costs are from the World Bank’s Remittance Prices Worldwide database.
  6. The phrase “not your keys, not your coins,” widely used in the cryptocurrency community, captures the principle that funds held by an exchange or other custodian on a user’s behalf are functionally promises by that custodian rather than direct holdings, and are subject to the custodian’s solvency and honesty. Paper 4 in this series will treat the implications of this principle in depth.
  7. The historical pattern of major technological innovations producing both real benefits and unintended consequences has been the subject of substantial scholarly work; readers interested in pursuing the comparison further may find the references to Carlota Perez and to Edward Tenner especially useful.

References

Antonopoulos, A. M. (2017). Mastering Bitcoin: Programming the open blockchain (2nd ed.). O’Reilly Media.

Auer, R., & Tercero-Lucas, D. (2022). Distrust or speculation? The socioeconomic drivers of U.S. cryptocurrency investments. Journal of Financial Stability, 62, Article 101066. https://doi.org/10.1016/j.jfs.2022.101066

Böhme, R., Christin, N., Edelman, B., & Moore, T. (2015). Bitcoin: Economics, technology, and governance. Journal of Economic Perspectives, 29(2), 213–238. https://doi.org/10.1257/jep.29.2.213

Buterin, V. (2014). A next-generation smart contract and decentralized application platform [Ethereum whitepaper]. Ethereum Foundation. https://ethereum.org/en/whitepaper/

Carter, N. (2021). How much energy does Bitcoin actually consume? Harvard Business Review. https://hbr.org/2021/05/how-much-energy-does-bitcoin-actually-consume

Demirgüç-Kunt, A., Klapper, L., Singer, D., & Ansar, S. (2022). The Global Findex Database 2021: Financial inclusion, digital payments, and resilience in the age of COVID-19. World Bank. https://doi.org/10.1596/978-1-4648-1897-4

Eichengreen, B. (2019). From commodity to fiat and now to crypto: What does history tell us? (NBER Working Paper No. 25426). National Bureau of Economic Research. https://doi.org/10.3386/w25426

Greenberg, A. (2012). This machine kills secrets: How WikiLeakers, cypherpunks, and hacktivists aim to free the world’s information. Dutton.

Levy, S. (2001). Crypto: How the code rebels beat the government, saving privacy in the digital age. Viking.

Nakamoto, S. (2008). Bitcoin: A peer-to-peer electronic cash system. https://bitcoin.org/bitcoin.pdf

Narayanan, A., Bonneau, J., Felten, E., Miller, A., & Goldfeder, S. (2016). Bitcoin and cryptocurrency technologies: A comprehensive introduction. Princeton University Press.

Perez, C. (2002). Technological revolutions and financial capital: The dynamics of bubbles and golden ages. Edward Elgar.

Popper, N. (2015). Digital gold: Bitcoin and the inside story of the misfits and millionaires trying to reinvent money. Harper.

Schär, F. (2021). Decentralized finance: On blockchain- and smart contract-based financial markets. Federal Reserve Bank of St. Louis Review, 103(2), 153–174. https://doi.org/10.20955/r.103.153-74

Tenner, E. (1996). Why things bite back: Technology and the revenge of unintended consequences. Knopf.

World Bank. (2024). Remittance Prices Worldwide quarterly (Issue 49). World Bank Group. https://remittanceprices.worldbank.org

Yermack, D. (2015). Is Bitcoin a real currency? An economic appraisal. In D. Lee Kuo Chuen (Ed.), Handbook of digital currency: Bitcoin, innovation, financial instruments, and big data (pp. 31–43). Academic Press. https://doi.org/10.1016/B978-0-12-802117-0.00002-3



Field Identity, Public Engagement, and Self-Critique: The Reflexive Commitments of Neglect Studies


Executive Summary

This paper concludes the series by addressing the questions that the preceding papers have deferred: what the field is to be called, how it should present itself to non-specialist audiences, how it should distinguish its work from adjacent enterprises that share some of its concerns but operate by different standards, and how it should examine its own attention patterns with the same scrutiny it applies to other fields. The paper proceeds from the premise that a field whose central business is the rigorous identification of neglected questions cannot afford to be itself a producer of unfounded claims, of unexamined assumptions, or of the same blind spots it documents elsewhere.

The paper develops seven arguments. The first is that the naming question, deferred throughout the series, must be settled deliberately because the name shapes how the field is received by external audiences and what it becomes internally. The second is that public-facing scholarship is intrinsic to the field’s mission rather than incidental to it, and that the public engagement must be conducted with the same methodological seriousness as the field’s scholarly work. The third is that the field must distinguish its rigorous identification of neglect from grievance scholarship, contrarianism, and conspiracy thinking, and that the distinction depends on professional discipline that the field’s institutions must support. The fourth is that internal pluralism — the field’s hospitality to scholars whose underlying commitments differ — is a methodological requirement rather than a political accommodation, since the patterns of neglect cut across the political spectrum and a field that addresses neglect on only one side will misidentify the patterns it studies. The fifth is that the field requires a built-in reflexive program to examine its own attention patterns, with the program operating through established procedures rather than depending on the goodwill of individual scholars. The sixth is that sunset and renewal mechanisms are appropriate to a field that, when it succeeds in particular domains, should sometimes work itself out of a job. The seventh is that the long-term success criteria for the field should be the measurable redistribution of attention in the wider scholarly ecosystem, rather than the growth of the field itself.

The paper concludes with a synthesis of the series and a discussion of the next steps that the founding scholars of the field should consider.


1. Introduction

The preceding seven papers have specified the field’s conceptual framework, its methodological standards, its academic infrastructure, its funding strategy, its data resources, its workforce, and its engagement with research-governing institutions. The specifications have been substantial and the proposals have been many, but the series has consistently deferred a set of questions that the field’s character ultimately depends on. What is the field called, and what does the name commit it to? How does the field speak to audiences beyond its own scholars? How does the field distinguish its work from enterprises that look similar from outside but operate by different standards? How does the field examine itself with the rigor it applies to others?

The questions were deferred not because they are unimportant but because they could not be answered until the foundation was in place. A field’s name, its public presentation, its boundaries against adjacent enterprises, and its reflexive commitments depend on what the field actually is, and the preceding papers have specified what the field is to be. The conclusion of the series can now address what the field is to become as it presents itself to the wider world.

The argument of this paper is that the reflexive commitments are not optional additions to the field’s substantive work; they are constitutive of it. A field whose central business is the rigorous identification of neglected questions cannot afford to be itself a producer of unfounded claims, of unexamined assumptions, or of the same blind spots it documents elsewhere. The reflexive commitments operate as professional discipline, as institutional design, and as cultural maintenance. They are demanding to sustain, but the field’s credibility depends on them, and the credibility is the foundation on which everything else the series has proposed rests.

The paper proceeds through the seven questions identified above, addresses the synthesis of the series as a whole, and concludes with the practical next steps that the field’s founding scholars should consider.

2. The Naming Question

The working term used throughout this series has been neglect studies, with the acknowledgment in Paper 1 that the term is provisional and that the field’s eventual name remains an open question. The question must now be addressed.

Several candidate names have been used in adjacent literatures or could be proposed for the field. Each carries implications, and the choice among them shapes both how the field is received externally and what it becomes internally.

Agnotology, as developed by Robert Proctor and Londa Schiebinger, has the advantages of an established intellectual tradition, a substantial literature, and a recognizable name in the relevant scholarly circles. The case for adopting the term and treating the proposed field as an expansion of agnotology is that the conceptual core is shared, that the methodological tools overlap substantially, and that the broader recognition of agnotology in the science studies community provides an entry point that a wholly new name would lack. The case against is that agnotology’s center of gravity has settled firmly on strategic ignorance — the manufactured doubt produced by interested actors — while the larger and arguably more consequential category of passive neglect has received much less attention. The field’s expansion of the agnotology agenda may sit uncomfortably within the existing literature, and the term may signal to external audiences a different set of commitments than the field actually maintains.

Undone-science studies, drawing on the Hess and Frickel tradition, has the advantages of a methodologically sophisticated literature, an explicit attention to civil-society challenges to research-priority decisions, and a name that captures one of the field’s central concerns. The case against is that the term has been associated primarily with environmental and public-health contexts and with cases where organized constituencies exist to identify the gap, while the field’s scope extends well beyond these. The term may also signal an activist orientation that the field’s methodological standards do not support, and the field would need to expand both the scope and the disposition that the term implies.

Attention studies has the advantage of plain English and the disadvantage of being already used for several different scholarly enterprises in cognitive science, media studies, and elsewhere. The collision of meanings would generate persistent confusion, and the name probably should be rejected on those grounds alone.

Epistemic gap analysis has the advantage of methodological transparency — it announces what the field does in terms that scholars from other disciplines can readily understand — and the disadvantage of clinical neutrality that may not serve the field’s public engagement. The name reads as a technical specialty rather than as a field with its own intellectual identity, and the field’s broader ambitions may be poorly served by it.

Neglect studies, the working term, has the advantages of plain English, of capturing what the field does in terms that non-specialists can immediately understand, and of not colliding with established uses in other contexts. The disadvantages are that the term sounds less methodologically sophisticated than the field aspires to be, that it carries connotations of complaint or grievance that the field must explicitly distance itself from, and that it does not connect the field to the established traditions on which it draws.

The recommendation is that neglect studies should be retained as the field’s working name, with the explicit understanding that the name’s plainness is a virtue rather than a limitation. The field’s methodological sophistication should be demonstrated through the work rather than signaled by the name, and the plainness of the name has the practical advantage of accessibility to the public and policy audiences whose engagement the field requires. The name’s potential connotations of grievance must be addressed through the field’s professional discipline rather than avoided through more technical naming. The connection to established traditions can be maintained through citation practice, through institutional partnerships, and through the work’s substantive engagement with the literatures on which it draws.

The recommendation is offered with the recognition that other choices are defensible and that the field’s founding scholars may settle on a different name. What matters more than the specific choice is that the choice be made deliberately, with attention to its implications, and that the choice be sustained consistently across the field’s outputs once made. A field that drifts among multiple names in its early years confuses external audiences and disadvantages itself in the consolidation that establishes a field’s identity.

3. Public-Facing Scholarship

The field’s public engagement is intrinsic to its mission rather than incidental to it. The argument has three components.

The first is that the field’s findings have public implications. To say that the research system has misallocated attention in particular ways is to say something with consequences for how public resources are spent, for what questions are addressed in policies that affect people’s lives, and for what voices are heard in the production of knowledge that shapes public understanding. The implications are not always obvious or immediate, but they are real, and the public has legitimate interests in the field’s findings.

The second is that the public is itself a constituency for some of the field’s most important work. The constituency-less-questions category in the Paper 1 taxonomy points to cases where the absent parties are members of the public whose concerns have not been adequately represented in scholarly research. The field’s engagement with these cases requires engagement with the publics whose interests are involved, both to inform the public about the patterns of neglect that affect them and to learn from the public about the questions that scholarly research has failed to address.

The third is that the public is the ultimate source of the legitimacy on which the research system depends. The funding that supports research, the institutional autonomy that universities enjoy, the standing of scholarly expertise in policy decisions: all of these depend on public support that can be withdrawn if the public concludes that the research system has failed to serve its interests. A field that examines the patterns of attention in scholarly inquiry has a specific contribution to make to the public’s understanding of the research system, and the contribution serves both the public’s interests and the research system’s long-term legitimacy.

The public-facing scholarship that the field requires must be conducted with the same methodological seriousness as the field’s scholarly work. The standards include accuracy in the representation of findings, explicit acknowledgment of the limitations and uncertainties that the underlying work carries, attention to the framing of findings in ways that the public can use, and avoidance of the sensationalism that exaggerates the field’s claims for the sake of attention.

The field should pursue several specific forms of public engagement. The dashboard introduced in Paper 5 provides a public-facing data resource that the public can use to explore the patterns of attention in research areas of interest. The journalism partnerships that translate the field’s findings into accessible forms can amplify the field’s reach beyond what the field’s own scholars can accomplish. The participation of the field’s scholars in broader public conversations about science and research policy provides opportunities to bring the field’s perspective to discussions that would otherwise proceed without it. The educational materials that introduce the field’s questions to students and general audiences support the broader public literacy on which the field’s longer-term standing depends.

The risks of public engagement are familiar from many scholarly fields and require explicit attention. The first risk is the simplification of findings in ways that misrepresent the underlying work. The risk is sometimes unavoidable in genuinely public-facing communication, but it must be managed by careful attention to how findings are framed, by explicit acknowledgment of the simplifications when they occur, and by the availability of fuller treatments for audiences who want to engage with the work in more depth.

The second risk is the misuse of findings by actors with their own agendas. The risk cannot be eliminated, but it can be reduced by the field’s own clarity about what its findings support and do not support, and it can be managed by the field’s willingness to correct misuses when they occur.

The third risk is the distortion of the field’s research agenda by the demands of public attention, with the field’s scholars finding themselves drawn toward the topics that generate public interest at the expense of less visible but equally important work. The corrective is the maintenance of the field’s professional standards independently of public attention patterns, with the understanding that not all of the field’s important work will receive public attention and that the field’s value is not measured by its visibility alone.

4. Distinguishing the Field from Adjacent Enterprises

The field must distinguish its work from adjacent enterprises that share some of its concerns but operate by different standards. The distinction is essential to the field’s credibility and to its capacity to engage productively with the broader research-policy environment.

Three adjacent enterprises deserve specific treatment.

4.1 Grievance Scholarship

Some scholarship that identifies cases of neglect operates as a vehicle for advancing the substantive interests of its authors rather than as a methodologically careful identification of patterns in the research system. The scholarship may be sincere — the authors may genuinely believe that the cases they identify are neglected — but the methodological standards are typically lower than the field requires, and the conclusions tend to align predictably with the authors’ substantive commitments rather than emerging from independent analysis.

The distinction between grievance scholarship and rigorous identification of neglect depends on the methodological standards developed in Paper 2 and on the professional norms developed across the series. The field’s outputs should be evaluated against the tiered evidence standard, with the higher tiers requiring triangulation across methods and explicit consideration of the alternative explanations for apparent neglect. The field’s professional culture should reward methodologically careful work that produces conclusions the author would not have predicted in advance, and should discount work whose conclusions track the author’s prior commitments without independent evidentiary support.

The field’s institutional structures must be designed to enforce the distinction. The journal’s editorial standards must require submissions to meet the methodological criteria appropriate to the tier claimed, with editors and reviewers trained to recognize the difference between rigorous identification of neglect and substantive advocacy in methodological clothing. The registry’s evidence-tier system must be applied consistently, with entries that do not meet the documentation requirements either declined or accepted only at the exploratory tier with explicit labeling. The professional association’s standards must require members to maintain the methodological practices that distinguish the field’s work from grievance scholarship.

The enforcement is delicate because the boundary is not always clear and because the field’s scholars will sometimes have substantive commitments that bear on the questions they study. The distinction is not that the field’s scholars must lack commitments; it is that the work must meet the methodological standards regardless of the commitments. A scholar who has a substantive interest in a particular case of neglect can still produce rigorous work on that case, provided the work is conducted by methods that would be persuasive to scholars who do not share the interest, and provided it explicitly considers the alternative explanations for the apparent neglect. The professional discipline is to meet that standard consistently, even when the substantive commitments make it tempting to relax.

4.2 Contrarianism

A second adjacent enterprise that the field must distinguish itself from is contrarianism — the disposition to oppose established positions because they are established rather than because the evidence supports the opposition. Contrarianism shares with neglect studies an interest in questions that established communities have not addressed, but the contrarian’s motivation is the rejection of established authority rather than the methodologically careful identification of patterns in attention.

The distinction matters because the field’s work can superficially resemble contrarianism. A scholar who identifies an area as neglected is implicitly questioning the priorities of the established research community in that area, and the questioning can be misread as contrarian opposition. The misreading damages the field’s credibility with the established communities whose cooperation it requires, and the field must accordingly distinguish itself from contrarianism explicitly.

The distinction depends on the methodological standards and on the field’s professional culture. The field’s work proceeds from analysis to conclusions, with the analysis conducted by methods that the relevant scholarly communities recognize as appropriate. The contrarian proceeds from a disposition to oppose established positions, with the analysis serving to justify a conclusion that the disposition predetermined. The two can produce findings that look similar at a glance, but the methodological structure that produces them is different, and the field’s outputs must make the methodological structure explicit.

The field’s engagement with established communities should also include explicit acknowledgment of what those communities have done well. A neglect-studies analysis that identifies a particular case of misallocation in a discipline is more credible when it situates the case against an accurate understanding of the discipline’s achievements and constraints than when it presents the case as evidence of the discipline’s general failure. The framing is not strategic accommodation; it is methodologically appropriate, since the patterns of attention always reflect both genuine accomplishments and the structural distortions that the field identifies, and accurate analysis must capture both.

4.3 Conspiracy Thinking

The third adjacent enterprise the field must distinguish itself from is conspiracy thinking — the tendency to attribute patterns in the research system to the deliberate coordination of interested actors who suppress particular questions for their own purposes. Conspiracy thinking shares with neglect studies an interest in the mechanisms by which attention is allocated, but the structure of explanation differs in important ways.

The field’s account of the mechanisms of neglect, developed across the preceding papers, emphasizes structural factors: funding incentives, prestige hierarchies, methodological habits, disciplinary boundaries, and historical contingencies. The mechanisms operate without anyone necessarily intending the patterns they produce, and the corrective interventions accordingly operate on the structures rather than on the supposed agents of suppression. The conspiracy account, by contrast, emphasizes intentional coordination: identified actors who deliberately suppress particular questions for purposes the conspiracy theorist can articulate.

The two accounts can sometimes apply to the same cases, and the agnotology literature has documented cases in which industrial actors have deliberately suppressed research findings that threatened their commercial interests. The field’s work should not deny that such cases exist; the literature on tobacco, on climate, and on other documented cases of strategic ignorance is solid, and the field’s scholars should engage with it on its merits. The distinction is that the field’s analytical default should be structural explanation, with intentional coordination introduced as an explanation only when the evidence specifically supports it. The default reflects both the structural realities of the research system, in which most patterns are produced by uncoordinated incentive structures rather than by deliberate suppression, and the methodological discipline that the field’s credibility requires.

The risk of conspiracy thinking is particularly acute for the field because the work attracts audiences who are predisposed toward conspiratorial interpretations. The audiences include scholars whose own work has been received poorly and who are inclined to attribute the reception to deliberate suppression rather than to methodological or substantive limitations of the work; advocacy organizations whose interests are served by framing research-policy decisions as the products of deliberate manipulation; and members of the public whose distrust of scientific institutions makes conspiratorial explanations attractive. The field’s engagement with these audiences must include the patient maintenance of the structural-explanation default and the explicit rejection of conspiratorial interpretations that the evidence does not support.

5. Internal Pluralism

The field’s hospitality to scholars whose underlying commitments differ is a methodological requirement rather than a political accommodation. The argument has three components.

The first is that the patterns of neglect cut across the political spectrum. Some neglected questions bear on concerns that are typically associated with the political left — health disparities, environmental injustice, the underrepresentation of women’s health questions in clinical research. Other neglected questions bear on concerns typically associated with the political right — the effects of family structure on child outcomes, the predictive validity of certain psychological constructs, the long-term consequences of particular policy interventions. Yet other questions cut across the political spectrum or are not naturally located on it at all — many questions in foundational science, in the humanities, and in the history of knowledge. A field that addresses only the questions associated with one political position will miss the patterns that cross the spectrum, and the partial coverage will reduce the field to a vehicle for advocacy rather than a genuine scholarly enterprise.

The second is that the field’s empirical credibility depends on demonstrating that its identifications of neglect are not driven by the political commitments of its scholars. A field whose findings consistently align with the commitments of one political position will be received as ideological rather than scholarly, and the reception will be appropriate to the pattern. The credibility requires the field to include scholars with diverse commitments, to evaluate work by methodological standards rather than by political affinity, and to recognize cases of neglect across the political spectrum on the same terms.

The third is that the methodological maturity the field requires is supported by intellectual diversity. A field whose scholars share substantive commitments tends to develop blind spots that scholars with different commitments would have noticed. The diversity is not a substitute for methodological rigor — diverse scholars can still produce poor work — but it is a condition for the rigor to operate effectively, since the methodological scrutiny depends on perspectives that can identify weaknesses that the scholars themselves do not see.

The implications for the field’s institutional structures are several. The editorial board of the flagship journal should include scholars with diverse commitments, with the diversity attended to deliberately at the founding rather than assumed to emerge naturally. The recruitment of doctoral students and early-career scholars should not select implicitly for particular commitments, and the field’s professional culture should welcome scholars across the spectrum on the same terms. The engagement with research-governing institutions should be conducted in ways that do not privilege the concerns of any particular political position. The reflexive program discussed below should specifically examine whether the field’s attention patterns show the kinds of political asymmetries that would compromise the field’s credibility.

The implications are demanding, and the field will face pressure against them. The pressure will come from scholars whose own commitments make particular cases of neglect more visible to them than others, from audiences who want the field to support their substantive positions, and from the broader political environment in which scholarly work increasingly carries political valence. Resisting the pressure is among the most important professional disciplines the field must develop, and the institutional design must explicitly support the resistance.

6. The Reflexive Program

The field requires a built-in reflexive program to examine its own attention patterns. The program operates through established procedures rather than depending on the goodwill of individual scholars, and the procedures should be specified in the field’s foundational documents rather than developed in response to subsequent controversies.

The reflexive program has several specific components.

The first is the periodic audit of the field’s own research portfolio. The audits should be conducted on a defined schedule — perhaps every five years — by scholars not directly involved in the work being audited, and the audits should examine what the field has and has not addressed during the period under review. The findings should be published in the field’s outlets and should inform the discussions of the field’s priorities going forward. The audits should specifically attend to potential blind spots: areas the field would be expected to address but has not, perspectives the field has not adequately included, and patterns in the field’s outputs that suggest implicit priorities the field’s scholars have not endorsed explicitly.
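The quantitative core of such an audit can be sketched very simply. The function below is a hypothetical illustration only: the area tags, the expected shares, and the flagging threshold are all invented for the sketch, and a real audit would derive the expected shares from the field's stated priorities and use far richer data.

```python
from collections import Counter

def audit_portfolio(outputs, expected_share, gap_threshold=0.5):
    """Compare the observed share of outputs per substantive area against
    an expected share, flagging areas whose observed share falls below
    gap_threshold * expected (candidate blind spots for the auditors)."""
    counts = Counter(area for out in outputs for area in out["areas"])
    total = sum(counts.values())
    flags = []
    for area, expected in expected_share.items():
        observed = counts.get(area, 0) / total if total else 0.0
        if observed < gap_threshold * expected:
            flags.append((area, observed, expected))
    return flags

# Hypothetical five-year audit data: four outputs, three declared areas.
outputs = [
    {"areas": ["funding-patterns"]},
    {"areas": ["funding-patterns", "journal-practices"]},
    {"areas": ["funding-patterns"]},
    {"areas": ["journal-practices"]},
]
expected = {"funding-patterns": 0.4, "journal-practices": 0.4,
            "university-incentives": 0.2}
print(audit_portfolio(outputs, expected))
# → [('university-incentives', 0.0, 0.2)]
```

The flagged areas are inputs to the auditors' judgment, not verdicts: a low observed share may reflect a deliberate priority the field has endorsed, which is precisely what the audit discussion is meant to establish.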

The second is the explicit attention to neglect across the political spectrum. The discussion in section 5 above identified the methodological requirement, and the reflexive program should include the procedural mechanisms that maintain the requirement in practice. The audit findings should report on the political distribution of the cases the field has addressed; the editorial decisions should be reviewed periodically to ensure that they do not show patterns of preference for particular kinds of cases; and the recruitment and retention of scholars should be examined to ensure that the field’s professional community remains diverse in the commitments its members bring.

The third is the assessment of the field’s engagement relationships and their effects on the field’s research portfolio. The engagement with research-governing institutions, discussed in Paper 7, creates pressures that can shift the field’s emphases in ways the field’s scholars would not endorse on reflection. The reflexive program should examine whether the engagement has produced such shifts, whether the engagement partners’ priorities are appropriately represented in the field’s work, and whether the structural commitments that protect the field’s analytical independence are being maintained in practice. The assessment should be conducted by scholars who are not themselves heavily involved in the engagement relationships, and the findings should be published in the field’s outlets.

The fourth is the periodic review of the field’s methodological standards and the tiered evidence system. The standards introduced in Paper 2 will require revision as the field’s experience accumulates, and the revision should be conducted through deliberate processes rather than through informal drift. The review should examine whether the standards have been applied consistently across the field’s outputs, whether the tiered system has functioned as intended, and whether the methodological developments in adjacent fields require the field’s standards to be updated. The review should be conducted by the field’s professional community through transparent procedures, and the revised standards should be documented explicitly.

The fifth is the examination of the field’s own internal patterns of attention. A field that studies which questions are addressed and which are neglected in other research areas must apply the same scrutiny to its own work. The examination should identify the substantive areas, methodological approaches, and types of cases that the field has emphasized and those it has not, and should examine whether the patterns reflect deliberate priorities or implicit ones that the field’s scholars would not endorse on reflection. The examination should be conducted with the same methodological seriousness the field applies to other fields, and the findings should inform the field’s priorities going forward.

The reflexive program is uncomfortable in practice because it requires the field’s scholars to apply scrutiny to their own work with the same rigor they apply to others. The discomfort is the point: a field whose central business is the rigorous identification of neglected questions cannot afford to be itself a producer of unfounded claims, of unexamined assumptions, or of the same blind spots it documents elsewhere. The reflexive commitments must be sustained as professional discipline, and the institutional structures must be designed to support the discipline even when sustaining it is uncomfortable.

7. Sunset and Renewal

A healthy neglect-studies enterprise should sometimes work itself out of a job in particular domains. The argument is that the field’s success in specific cases consists precisely in the redistribution of attention that the field has identified as warranted. When a previously neglected area develops its own research community, institutional infrastructure, and sustained attention, the field’s specific work on that area has succeeded, and the work itself becomes less necessary even as it leaves behind a transformed research landscape.

The principle has implications for how the field organizes its specific research programs. A center, project, or fellowship dedicated to a particular case of neglect should be conceived with explicit consideration of what the success conditions would look like and what the appropriate response to success would be. The success conditions should be specified before the work begins, the indicators that the conditions are being met should be tracked during the work, and the response to success should include the redirection of the resources to other neglected cases rather than the indefinite continuation of the specific program.

The principle is uncomfortable institutionally because institutional structures tend to perpetuate themselves once established. A center has staff whose livelihoods depend on the center’s continuation, a project has scholars whose careers are invested in the project’s ongoing work, and the natural pressures favor the institutional persistence even when the original justification has weakened. The field’s institutional design must include explicit mechanisms for resisting these pressures, with sunset clauses in specific programs, with periodic external reviews that consider whether continuation is warranted, and with cultural norms that celebrate the cases in which the field’s specific work has succeeded and is no longer needed.

The corresponding principle is renewal. The cases of neglect are not static; new cases emerge as research areas develop, as new methodologies become available, as new constituencies organize, and as the broader research system changes. The field’s institutional structures should include explicit mechanisms for identifying emerging cases and for redirecting attention toward them, with the renewal proceeding alongside the sunset of specific programs whose work has succeeded.

The sunset-and-renewal principle applies to the field as a whole as well as to its specific programs. If the field’s broader work succeeds — if the patterns of attention in the research system become more responsive to the structural distortions the field has identified, if the institutional mechanisms for identifying and addressing neglect become routine features of the research system, if the methodological and conceptual contributions the field has made become widely diffused — then the field’s specific institutional structures may become less necessary even as the broader work continues. The possibility should not be feared. A field whose mission is the redistribution of attention should welcome the success that makes its specific work less necessary, even as it continues to address the new cases that emerge as the research landscape evolves.

The recognition does not require the field to plan its own dissolution. The patterns of neglect are persistent features of scholarly inquiry, and the field’s work will remain necessary for as long as scholarly inquiry continues to allocate attention in the structurally distorted ways that the preceding papers have documented. The recognition is rather that the field’s success is measured not by its own growth and persistence but by the changes in the broader research system that the field’s work has helped to produce. The orientation toward external success rather than internal preservation is among the most important cultural commitments the field must maintain.

8. Long-Term Success Criteria

The discussion of sunset and renewal points toward the question of how the field’s long-term success should be measured. The answer is that the measure should be the redistribution of attention in the wider scholarly ecosystem, not the growth of the field itself.

The specific indicators of success are several. The first is the reduction in patterns of structural neglect that the field has documented. The patterns identified in particular cases should diminish over time if the field’s interventions have been effective, and the diminution should be measurable through the same bibliometric and analytical tools that identified the original patterns. The measurement requires the long-term data infrastructure discussed in Paper 5 and the sustained analytical work that the workforce of Paper 6 will conduct.
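The simplest form such a measurement can take is a trend estimate on a yearly neglect index for a documented case. The sketch below is illustrative, with an invented index series; a real analysis would define the index from bibliometric data and use proper inferential methods rather than a bare least-squares slope.

```python
def trend_slope(years, index):
    """Ordinary least-squares slope of a neglect index over time;
    a negative slope indicates the documented pattern is diminishing."""
    n = len(years)
    mean_y = sum(years) / n
    mean_i = sum(index) / n
    num = sum((y - mean_y) * (i - mean_i) for y, i in zip(years, index))
    den = sum((y - mean_y) ** 2 for y in years)
    return num / den

# Hypothetical index values for one documented case, 2025-2029.
years = [2025, 2026, 2027, 2028, 2029]
index = [0.80, 0.74, 0.71, 0.65, 0.60]
print(f"slope per year: {trend_slope(years, index):.3f}")
# → slope per year: -0.049
```

A declining slope is evidence consistent with effective intervention, not proof of it; attributing the change to the field's work requires the comparative and counterfactual analysis that the methodological papers describe.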

The second is the incorporation of the field’s methods and concepts into the routine practice of research-governing institutions. The funding agencies that adopt portfolio review as a regular practice, the learned societies that commission stocktaking reviews of their own disciplines, the universities that revise their tenure criteria to recognize the kinds of contributions the field’s scholars make, and the international organizations that incorporate attention to neglect into their standard practices all represent forms of success that go beyond the field’s own work. The success consists in the field’s contributions becoming part of how the research system works rather than remaining specific to the field’s own activities.

The third is the broader cultural shift in how the research system understands itself. The recognition that attention is allocated rather than distributed, that the allocation mechanisms produce structural distortions, and that the distortions are appropriate subjects of scholarly study and corrective intervention all represent cultural shifts that the field’s work can contribute to even when the contributions cannot be traced to specific outputs. The cultural shift is harder to measure than the specific indicators above, but it is the deepest form of success the field can achieve, and the indicators of it can be tracked through the changing terms in which the research system discusses its own priorities.

The success criteria are demanding, and they play out on timelines that exceed any individual scholar’s career. The field’s founding scholars will not see the full measure of the success they have contributed to, and the patience required to sustain work whose results extend across generations is among the cultural commitments the field must maintain. The patience is not resignation; it is the recognition that the work the field undertakes is of a scale that requires sustained effort over long periods, and that the contributions of any individual scholar or cohort are valuable as parts of a larger project rather than as self-contained achievements.

The implication for the field’s evaluation of its own progress is that the standard measures of scholarly success — citations, publications, grants, prestige — are partial indicators rather than ultimate measures. The standard measures matter for the practical reasons that any field’s standing depends on them, and the field must perform adequately by these measures to maintain the institutional infrastructure that supports its work. But the standard measures do not capture the field’s deepest contributions, and the field’s professional culture should keep the broader success criteria in view rather than allowing the standard measures to become ends in themselves.

9. Synthesis of the Series

The preface to this series argued that the distribution of scholarly attention bears a complicated and often weak relationship to the distribution of scholarly importance, and that the institutional and intellectual scaffolding needed to study this phenomenon systematically has not yet been built. The seven papers that followed have specified what the scaffolding would look like: the conceptual framework that defines what the field studies, the methodological standards that distinguish rigorous identification of neglect from impressionistic claims, the academic infrastructure that hosts the work, the funding strategy that sustains it, the data resources that enable it, the workforce that conducts it, and the engagement with research-governing institutions through which the work reaches the bodies that can act on it.

This final paper has addressed the commitments that hold the whole structure together: the field’s name and public identity, its distinctions from adjacent enterprises that operate by different standards, its internal pluralism, its reflexive examination of its own attention patterns, its orientation toward eventual success rather than indefinite self-perpetuation, and its broader success criteria measured by changes in the wider research system rather than by the field’s own growth.

The series has consistently emphasized the interdependencies among its elements. The methodological standards cannot be applied without the institutional infrastructure that hosts them; the institutional infrastructure cannot be sustained without the funding strategy that supports it; the funding strategy cannot be implemented without the workforce that engages with the funding partners; the engagement with funding partners cannot be conducted without the analytical independence that the reflexive commitments protect. The field cannot be built piecemeal, with elements added as resources allow and the others deferred indefinitely. The elements support each other, and the coherent building of the whole requires attention to all of them from the outset, even when their development proceeds on different timelines.

The series has also emphasized the long timelines on which the field’s work proceeds. The founding centers can be established within a few years; the journal can be launched within five; the workforce that gives the field a sustained scholarly community emerges over a decade; the broader changes in the research system that constitute the field’s deepest success extend across generations. The patience required to sustain the work over these timelines is among the cultural commitments the founding scholars must maintain, and the institutional structures must be designed to support the patience even when the temptations toward shorter-horizon thinking are persistent.

The series has been explicit about the risks the field faces. The risk of becoming a vehicle for grievance scholarship, of being captured by the institutions it engages with, of developing the same blind spots it documents elsewhere, of growing too quickly to maintain its standards or too slowly to sustain its infrastructure, of being co-opted by political actors whose purposes the field’s analytical commitments do not support — all of these have been identified, and the structural commitments that protect against them have been specified. The protections are not guarantees; they are professional disciplines that the field’s scholars must maintain through ongoing effort, and the maintenance is among the most important professional commitments the field requires.

The series has been deliberate in its argumentative structure, with each paper building on the preceding ones and the conclusions of each paper feeding into the next. The structure reflects the actual interdependencies of the field’s institutional infrastructure rather than a rhetorical convenience. The founding scholars who read the series should expect to encounter the same interdependencies in practice, and the founding work should be planned with the recognition that the elements must be developed in coordination rather than in sequence.

10. Next Steps

The series has been a design document rather than an implementation plan, and the conclusion of the series should specify the next steps that the founding scholars should consider.

The first step is the convening of a founding committee. The committee should include senior scholars from the disciplines on which the field draws — agnotology, metascience, science and technology studies, the history and philosophy of science, library and information science, priority-setting research, and the research-waste literature — together with representatives from the constituencies the field will serve: funding agencies that have expressed interest in the field’s work, foundations whose missions align with the field, and practitioners who can speak to the public’s interests in the field’s outputs. The committee’s task should be to review the series, to refine the proposals in light of the committee’s collective expertise, and to develop the specific implementation plans that the proposals require.

The second step is the identification of the institutions that will host the founding centers. The criteria for selection were discussed in Paper 3, and the committee should apply the criteria to identify the candidate institutions whose senior scholars, institutional environments, and funding arrangements make them the most plausible hosts. The selection should be deliberate rather than first-come, first-served, and the institutions selected should be expected to commit to the long-term institutional support that the founding centers will require.

The third step is the establishment of the funding partnerships that will support the founding work. The strategy outlined in Paper 4 emphasized philanthropic funding as the likely first mover, and the committee should engage with the foundations whose missions align with the field to develop the specific funding proposals that the founding work requires. The engagement should be conducted with attention to the diversification that the funding strategy emphasized, with multiple funders cultivated from the outset rather than reliance on any single funder for the early support.

The fourth step is the launch of the founding centers, with the institutional infrastructure, the funding, and the senior scholarly leadership in place. The launches should be coordinated across the centers rather than proceeding independently, with explicit attention to the complementarity among the centers and to the cross-institutional connections that will support the field’s broader community.

The fifth step is the development of the flagship journal, the registry, and the other elements of the publication infrastructure that Paper 3 specified. The development should proceed in coordination with the founding centers, with the editorial leadership drawn from the centers’ senior scholars and the operational support hosted by one of the centers.

The sixth step is the longer-term work that the preceding papers have specified: the doctoral training, the workforce development, the data infrastructure, the engagement relationships, and the reflexive program. The work proceeds on the timelines that the relevant papers identified, with the founding committee maintaining oversight of the broader strategy during the period before the field’s professional association can assume the governance functions.

The steps are demanding, and the founding scholars who undertake them are committing to long-term work whose results extend beyond their own careers. The commitment is justified, in this paper’s view, by the contributions the field can make to the broader research enterprise and to the public interests that the research enterprise serves. The justification is a matter for the founding scholars to assess for themselves, and the series has been offered as a contribution to that assessment rather than as a settled brief.

The preface to this series concluded with an invitation: that readers who find the case persuasive engage with the subsequent papers, and that readers who find it unpersuasive articulate the grounds of their disagreement. The conclusion of the series renews the invitation. The field that the series has proposed will be built only if scholars who find the case persuasive undertake the work, and the work will be more rigorous if it proceeds in conversation with scholars whose perspectives differ from the founders’. The conversation should continue, and the series should be understood as one contribution to it rather than as the last word.


Notes

[^1]: The naming question for emerging interdisciplinary fields has been examined in several contexts; the discussion in Klein (1990) of how field names shape disciplinary identities provides useful background, and the case studies in Frodeman, Klein, and Pacheco (2017) include several relevant examples.

[^2]: The literature on public engagement in science is substantial; the standards-of-practice work developed by the National Co-ordinating Centre for Public Engagement in the U.K. and parallel bodies elsewhere provides operational guidance, and the scholarly literature on the role of public engagement in research is reviewed in Stilgoe, Lock, and Wilsdon (2014).

[^3]: The distinction between rigorous identification of neglect and grievance scholarship draws on the broader literature on scholarly standards in fields with applied dimensions; Lamont (2009), cited in Paper 2, addresses adjacent questions in the context of peer review.

[^4]: The literature on contrarianism in scholarly contexts is partial but includes useful treatments in the philosophy of science; the discussion in Boudry, Blancke, and Pigliucci (2015) of the distinction between productive heterodoxy and unproductive contrarianism provides relevant analysis.

[^5]: The agnotology literature, cited extensively in earlier papers, is the primary scholarly source for the analysis of strategic ignorance; Proctor and Schiebinger (2008) and the case studies in Proctor (2011) and Oreskes and Conway (2010) provide the foundational material.

[^6]: The reflexivity literature in science studies is large and includes both methodological treatments and substantive applications; Woolgar (1988) provides a foundational statement, and the subsequent literature has developed the application of reflexive methods in many directions.

[^7]: The literature on the dissolution and renewal of research programs is partly historical and partly philosophical; Laudan (1977), cited in Paper 1, addresses the philosophical questions, and the historical literature on specific cases provides the empirical material.

[^8]: The long-term assessment of scholarly fields is discussed in the literature on the sociology of knowledge; the work of Whitley (2000) on the intellectual and social organization of the sciences provides useful conceptual resources, and the more recent literature on field formation and dissolution extends the analysis.


References

Boudry, M., Blancke, S., & Pigliucci, M. (2015). What makes weird beliefs thrive? The epidemiology of pseudoscience. Philosophical Psychology, 28(8), 1177–1198.

Frodeman, R., Klein, J. T., & Pacheco, R. C. S. (Eds.). (2017). The Oxford handbook of interdisciplinarity (2nd ed.). Oxford University Press.

Klein, J. T. (1990). Interdisciplinarity: History, theory, and practice. Wayne State University Press.

Lamont, M. (2009). How professors think: Inside the curious world of academic judgment. Harvard University Press.

Laudan, L. (1977). Progress and its problems: Toward a theory of scientific growth. University of California Press.

Oreskes, N., & Conway, E. M. (2010). Merchants of doubt: How a handful of scientists obscured the truth on issues from tobacco smoke to global warming. Bloomsbury Press.

Proctor, R. N. (2011). Golden holocaust: Origins of the cigarette catastrophe and the case for abolition. University of California Press.

Proctor, R. N., & Schiebinger, L. (Eds.). (2008). Agnotology: The making and unmaking of ignorance. Stanford University Press.

Stilgoe, J., Lock, S. J., & Wilsdon, J. (2014). Why should we promote public engagement with science? Public Understanding of Science, 23(1), 4–15.

Whitley, R. (2000). The intellectual and social organization of the sciences (2nd ed.). Oxford University Press.

Woolgar, S. (Ed.). (1988). Knowledge and reflexivity: New frontiers in the sociology of knowledge. Sage.


The series concludes here. The founding scholars who take up the work that the series has proposed will determine whether the field that has been described becomes a reality, and the answer to that question lies in their hands rather than in any further argumentative effort the series could provide.


Engagement with Research-Governing Institutions: How Neglect Studies Works With the Bodies Whose Decisions It Studies


Executive Summary

This paper addresses the field’s engagement with the institutions whose decisions shape the distribution of scholarly attention: funding agencies, learned societies, journals, university administrations, government science advisory bodies, and international scientific organizations. The engagement is unavoidable because the field’s applied dimension depends on its findings reaching the bodies that can act on them, and it is fraught because those bodies are the same ones whose attention patterns the field exists to examine. The paper develops a strategy for engagement that allows the field to be useful to research-governing institutions without losing the analytical independence on which its value to those institutions depends.

The paper develops six arguments. The first is that engagement with research-governing institutions is intrinsic to the field’s mission rather than incidental to it, and that the engagement must be planned and supported with the same seriousness as the field’s scholarly work. The second is that funding agencies are the field’s primary interlocutors and that the engagement with them must address several specific functions — portfolio review, priority-setting consultation, embedded analysis — each of which has its own institutional dynamics. The third is that engagement with learned societies and journals operates through different mechanisms and addresses different aspects of the attention problem. The fourth is that engagement with university administrations is necessary for the institutional changes that affect what kinds of scholarship are professionally rewarded, and that this engagement is among the slowest of the field’s applied activities to produce visible results. The fifth is that engagement with government science advisory bodies and parliamentary committees provides the highest-leverage opportunities the field will have, but also carries the highest risks of co-optation and political entanglement. The sixth is that international scientific organizations and cross-border coordination require the field to develop capacities that the early founding years will not fully support, and that the international agenda should accordingly be paced realistically.

The paper concludes with a discussion of the independence problem — the central tension that engagement creates between usefulness to research-governing institutions and analytical credibility about them — and with concrete proposals for the institutional structures that can manage the tension.


1. Introduction

The preceding papers have specified the field’s conceptual framework, methodological standards, academic infrastructure, funding strategy, data resources, and workforce. All of this presupposes that the field’s outputs will reach audiences who can act on them. The audiences include other scholars, who can build on the field’s findings in their own work, and the broader public, which Paper 8 will address. The audiences also include the research-governing institutions whose decisions shape the distribution of scholarly attention, and the engagement with those institutions is the subject of this paper.

The engagement is intrinsic to the field’s mission for three reasons. The first is that the field’s conceptual framework — the identification of cases in which attention has been misallocated — implicitly contains a prescription, since to say that attention has been misallocated is to say that the allocation should be changed. The prescription cannot be acted on without engagement with the institutions that make allocation decisions. The second is that the field’s data depend in substantial part on the cooperation of research-governing institutions, particularly for the analysis of funding decisions, of journal-acceptance patterns, and of institutional-priority decisions. The cooperation is more likely when the institutions see the field as a productive partner rather than as an outside critic. The third is that the field’s professional development requires the recognition of its outputs by the institutions that shape scholarly careers, and the recognition comes through engagement.

The engagement is fraught for the corresponding reasons. The institutions whose decisions the field studies are the same ones whose cooperation the field requires for its data, whose funding the field needs for its work, and whose recognition the field needs for its scholars’ careers. The asymmetry creates pressures toward producing findings that the institutions can accept, toward avoiding criticisms that might damage cooperation, and toward narrowing the field’s work to topics that the institutions find comfortable. The pressures are real, they have damaged the analytical independence of adjacent fields, and the field’s engagement strategy must include explicit structural protections against them.

The argument of this paper is that the tension between engagement and independence cannot be eliminated, but it can be managed. The management depends on institutional structures that maintain the field’s analytical independence as a non-negotiable commitment, on professional norms that reward uncomfortable findings rather than punishing them, and on engagement practices that approach research-governing institutions as collaborators in a shared project rather than as patrons whose preferences must be accommodated. The paper develops the argument through the specific engagement domains and concludes with the institutional structures that the strategy requires.

2. Funding Agencies

Funding agencies are the field’s primary interlocutors among research-governing institutions because their decisions affect the distribution of attention more directly than any other set of decisions in the research system. A scholar can pursue an unfashionable question without institutional support if their resources allow it, but the work that requires substantial funding — empirical projects with real-world data, projects requiring trained research staff, projects requiring specialized infrastructure — depends on funding-agency decisions in ways that scholarly effort alone cannot overcome. The funding agencies’ priorities shape what kinds of work get done, by whom, and at what scale, and the field’s engagement with them is accordingly the highest-leverage applied activity it can pursue.

The engagement with funding agencies serves four specific functions that deserve separate treatment.

2.1 Portfolio Review

The first function is the systematic review of funding portfolios to identify patterns of attention and gaps. A funding agency’s portfolio is the cumulative result of many individual grant decisions, and the patterns that emerge in the portfolio often differ from the priorities the agency would state in its formal documents. The portfolio may concentrate on particular topics, populations, methods, or institutions in ways that nobody explicitly chose, and the concentrations may persist for years without being noticed by the agency’s leadership.

Portfolio review provides funding agencies with information that their internal review processes do not produce. The agencies’ grant-review processes evaluate individual proposals against criteria specific to each program; the portfolio-level analysis examines the cumulative pattern across programs and across years. The two perspectives are complementary, and the portfolio review can identify patterns that no individual grant decision would have produced and that the agency has reason to want to know about.

The field’s contribution to portfolio review combines bibliometric and scientometric methods (Paper 2 and Paper 5) with substantive knowledge of the research areas the portfolio covers. The output is a structured report that documents what the portfolio supports, identifies the patterns of attention that emerge, compares the patterns to relevant external indicators (disease burden, expressed public concern, theoretical centrality), and flags the candidate cases where the patterns appear difficult to justify on substantive grounds. The report does not, in its standard form, recommend specific changes to the portfolio; it provides the information that the agency’s own decision-makers can use to consider whether changes are warranted.
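The comparison step described above — setting portfolio shares against an external indicator and flagging candidate cases without recommending changes — can be illustrated with a minimal sketch. The topic names, shares, and flagging threshold below are hypothetical illustrations, not data from any actual portfolio:

```python
# Minimal sketch of the portfolio-to-indicator comparison described above.
# All topic names, shares, and the 0.5 threshold are hypothetical.

def flag_candidate_gaps(funding_share, indicator_share, threshold=0.5):
    """Flag topics whose share of the funding portfolio falls well below
    their share of an external indicator (e.g. disease burden).
    Returns (topic, funding-to-indicator ratio) pairs, most neglected first.
    The function flags candidates; it does not recommend changes."""
    flagged = []
    for topic, fund in funding_share.items():
        indicator = indicator_share.get(topic, 0.0)
        if indicator > 0 and fund / indicator < threshold:
            flagged.append((topic, round(fund / indicator, 2)))
    return sorted(flagged, key=lambda pair: pair[1])

# Hypothetical portfolio: fraction of grant spending per topic.
funding = {"topic_a": 0.40, "topic_b": 0.35, "topic_c": 0.05, "topic_d": 0.20}
# Hypothetical external indicator: fraction of burden per topic.
burden = {"topic_a": 0.25, "topic_b": 0.30, "topic_c": 0.35, "topic_d": 0.10}

print(flag_candidate_gaps(funding, burden))  # prints [('topic_c', 0.14)]
```

A real review would of course rest on bibliometric classification of the portfolio and on contested choices of indicator; the sketch only shows why the output is a list of candidate cases for the agency’s own deliberation rather than a set of recommendations.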

The standard form is important because it preserves the analytical independence the field requires. A portfolio review that ends with specific recommendations becomes an advocacy document, and the boundary between scholarship and advocacy is precisely the one the field must maintain. A portfolio review that ends with information allows the agency to reach its own conclusions, with the field’s scholars available for consultation if the agency wants further analysis but without the field becoming the agency’s policy advocate.

The arrangement under which portfolio reviews are conducted requires careful design. The agency must commit to the review’s analytical independence, meaning that the field’s scholars conduct the work according to the field’s methodological standards rather than according to the agency’s preferences. The scholars must commit to constructive engagement with the agency, meaning that the work is conducted in conversation with the agency’s staff rather than as an external audit. The output must be available to both the agency and the field’s scholarly community, with the agency receiving the report first but with publication following on an agreed timeline. The structure has been used successfully in adjacent contexts and provides a workable model.[^1]

2.2 Priority-Setting Consultation

The second function is consultation in the priority-setting processes by which funding agencies establish their research priorities. The processes vary across agencies, but they typically include some combination of advisory committees, public consultations, strategic planning exercises, and program-specific reviews. The field’s contribution to these processes draws on the priority-setting methodology developed by the James Lind Alliance and adapted by others (Papers 2 and 4), with the contribution focused on the structured identification of neglected questions that the priority-setting process can consider.

The consultation work differs from portfolio review in being prospective rather than retrospective. The portfolio review examines the cumulative results of past decisions; the priority-setting consultation provides input into future decisions. The two are complementary, and the field’s engagement with funding agencies should typically include both.

The consultation work requires the field’s scholars to participate in advisory processes whose pace and structure differ from those of conventional scholarship. The processes operate on the timelines that the agency requires, with deliverables that the agency specifies, and with constraints on confidentiality and disclosure that the agency imposes. The constraints are typically compatible with the field’s analytical standards, but the field’s scholars must be prepared to work within them, and the workforce strategy of Paper 6 must include training in the skills the engagement requires.

A specific consideration is the field’s relationship to the constituencies whose voices the priority-setting processes are intended to include. The James Lind Alliance methodology and similar approaches emphasize the inclusion of patients, clinicians, and affected communities in research-priority decisions, and the field’s contribution to priority-setting should support rather than substitute for those voices. The field’s scholars provide methodological expertise and substantive analysis that complement the lived-experience contributions of affected communities; they should not present themselves as speaking for the communities whose participation the process is designed to include.

2.3 Embedded Analysis

The third function is embedded analysis, in which a scholar trained in the field’s methods is hosted by a funding agency for an extended period — typically one to two years — to conduct analyses that the agency’s own staff is not positioned to conduct. The embedding provides the field’s scholars with access to data and processes that external researchers cannot easily obtain, and provides the agency with analytical capacity that its own staffing structure does not support.

Embedded analysis has been used in several adjacent contexts, with mixed results that depend on the design of the arrangement. The arrangements that have worked well share several features. The scholar’s institutional affiliation remains with the home university or research center rather than transferring to the agency, which preserves the scholar’s analytical independence. The agency commits to providing access to the data and processes the work requires, with the access governed by explicit agreements that protect both the analytical work and the agency’s legitimate confidentiality interests. The scholar’s outputs are published in the field’s scholarly outlets on an agreed timeline, with the agency receiving early access but not vetoing publication. The scholar’s reentry into the academic world after the embedding is supported by arrangements that preserve the scholar’s career trajectory, since embedding work is sometimes undervalued in conventional tenure evaluations.

The arrangements that have worked poorly typically failed on one or more of these features. The scholar’s affiliation transferred to the agency, with the scholar’s outputs becoming agency documents rather than scholarly publications. The agency restricted publication on terms the scholar accepted under pressure but later regretted. The scholar’s career suffered because the embedding work was not recognized appropriately by the home institution. The lessons from the unsuccessful arrangements are documented and can inform the field’s embedded-analysis program.[^2]

The field should pursue embedded-analysis arrangements with several funding agencies as a priority engagement activity. The arrangements provide the field with the data access required for substantial work, provide the agencies with analytical capacity they value, and produce the kinds of scholarly outputs that build the field’s track record. The arrangements should be initiated through formal agreements rather than individual relationships, so that they survive changes in agency leadership and changes in the scholar’s career.

2.4 Foundation Engagement

The fourth function is engagement with private foundations as distinct from public funding agencies. The distinction matters because foundations operate with different constraints, different timelines, and different decision-making structures, and the engagement strategy must accommodate the differences.

Foundations are typically more accessible than public agencies, with shorter decision cycles, more flexible engagement structures, and program officers who have substantial discretion over funding decisions within their portfolios. The accessibility makes foundations natural early partners for the field’s engagement work, and the funding strategy of Paper 4 already identified them as the likely first movers in the field’s funding pipeline. The engagement strategy should build on the funding relationships to develop the analytical engagement that the field can offer foundations on the same terms it offers public agencies.

Foundations also have specific limitations. Their resources are smaller than those of major public agencies, which limits the scale of the analytical work they can support. Their portfolios are more specialized, which limits the breadth of the analyses that are relevant to any individual foundation. Their priorities can shift more rapidly than public agencies’ priorities, which creates challenges for sustained engagement. The engagement strategy should attend to these limitations and should pursue foundation engagement with realistic expectations about what individual relationships can accomplish.

3. Learned Societies and Journals

Learned societies and journals are the second category of research-governing institutions with which the field must engage. Their decisions affect attention patterns through different mechanisms than funding-agency decisions: by determining what is published, what is presented at conferences, what is recognized through awards and honors, and what defines the working priorities of the disciplines they serve.

The engagement with learned societies serves several specific functions.

The first is the commissioning of stocktaking reviews of disciplines’ own neglected territory. The stocktaking review series introduced in Paper 3 provides a structured venue for such reviews, and the engagement with learned societies should include encouraging them to commission stocktakings of their own fields’ attention patterns. The commissioning gives the societies ownership of the reviews and ensures that the work is conducted with the substantive depth that disciplinary expertise allows. The societies’ membership becomes an audience for the reviews, with the resulting discussions taking place in the venues where disciplinary priorities are actually set.[^3]

The second is engagement with journal editorial policies. Journals shape attention patterns through their editorial decisions about what kinds of work they will publish, what methodological standards they apply, what evaluative criteria they use, and what topics they treat as within or outside their scope. The decisions accumulate over time into the patterns that constitute disciplinary literatures, and changes in editorial policy can shift the patterns in ways that individual scholarly effort cannot. The field’s engagement with journals should include specific proposals for editorial changes that would address documented patterns of neglect — special issues commissioned on neglected topics, registered reports that decouple acceptance from the direction of results, training of reviewers in recognizing the value of work on neglected questions, and editorial standards that explicitly accommodate methodological pluralism.

The third is engagement with peer-review practice. The peer-review literature (cited in earlier papers) has documented the mechanisms by which conventional peer review produces conservatism, and several adaptations have been developed to address the mechanisms. The field’s engagement with journals and conferences should include advocacy for the adaptations that are well-supported by evidence: registered reports for confirmatory work, open peer review where appropriate, structured rubrics that reduce reviewer bias toward established questions, and explicit consideration of whether submissions address neglected areas in evaluating their contribution.

The fourth is engagement with recognition structures. Learned societies typically administer awards, prizes, and honorific recognition that shape disciplinary career trajectories. The recognition structures tend to reinforce existing patterns of attention by rewarding work on established questions, and the engagement with learned societies should include specific proposals for recognition structures that reward work on neglected questions. The proposals can take the form of new awards, of explicit criteria for existing awards that recognize the value of work on neglected questions, or of nominations for existing awards of scholars whose work on neglected questions would otherwise be overlooked.

The engagement with learned societies and journals is slower than engagement with funding agencies because the relevant decisions are more distributed and the mechanisms for change are more diffuse. The engagement should be sustained over the long term rather than expected to produce rapid results, and the field should measure progress by the cumulative changes in editorial practice and disciplinary norms rather than by any individual decision.

4. University Administrations

University administrations are the third category of research-governing institutions, and their decisions affect attention patterns through the structures that determine what scholarship is professionally rewarded. The decisions include tenure and promotion criteria, faculty hiring priorities, departmental and center funding, teaching loads and assignments, and the broader institutional climate that affects what kinds of work scholars feel free to pursue.

The engagement with university administrations is among the slowest of the field’s applied activities to produce visible results, because the relevant decisions operate on long timescales and affect attention patterns through indirect mechanisms. A change in tenure criteria that explicitly recognizes interdisciplinary work shifts the career incentives for current doctoral students, who will publish their first books a decade later, who will receive tenure five years after that, and who will eventually serve on tenure committees that apply the revised criteria to subsequent generations. The timeline is generational, and the engagement strategy must be paced accordingly.

The engagement with university administrations serves several functions.

The first is advocacy for tenure and promotion criteria that recognize the field’s distinctive contributions. The criteria that work poorly for the field’s scholars were discussed in Paper 6, and the engagement with university administrations should include specific proposals for revised criteria that recognize the value of interdisciplinary work, of work in newly established outlets, of methodological innovation, and of applied engagement with research-governing institutions. The proposals are most effective when they draw on the precedents established in adjacent interdisciplinary fields, since the administrations are more receptive to proposals that have been implemented elsewhere than to proposals that require them to be pioneers.[^4]

The second is advocacy for faculty appointments in the field. The appointments at the founding centers (Paper 3) are essential foundations, but the broader workforce that the field requires depends on appointments at universities that do not host founding centers. The engagement with these universities should include specific proposals for appointments — joint appointments between relevant disciplines and metascience programs, dedicated lines in newly established programs, appointment criteria that explicitly accommodate the field’s distinctive profile — and should provide the universities with the information they need to evaluate the proposals.

The third is engagement with the structures that determine doctoral training. The doctoral programs that produce the field’s workforce operate under university structures that affect which students can be admitted, what they can be taught, and which dissertations are accepted. The engagement with university administrations should include advocacy for the structures that support the field’s training pathways — concentrations within existing programs, joint programs across departments, dedicated programs at institutions where the founding investment can be justified.

The fourth is engagement with the broader institutional climate that affects what kinds of work scholars feel free to pursue. The climate is shaped by many factors: the administrators’ rhetoric about what the university values, the funding decisions that signal institutional priorities, the appointment patterns that shape the senior faculty’s composition, and the disciplinary norms that operate within departments. The engagement with administrations should include attention to all of these factors, with the recognition that no single intervention will transform the climate but that sustained attention across multiple dimensions can shift it over time.

5. Government Science Advisory Bodies

Government science advisory bodies and parliamentary committees represent the highest-leverage engagement opportunities the field will have. The bodies advise governments on research-policy decisions, on the allocation of public research funding, on the structure of the research system, and on the broader policies that shape what kinds of scholarship are conducted in their jurisdictions. The decisions that emerge from this engagement affect attention patterns across entire national research systems, and the leverage corresponds to the breadth of effect.

The engagement also carries the highest risks. Government bodies operate in political environments that pose risks of co-optation, of political entanglement, and of the field’s work being used in ways that compromise its analytical independence. The engagement strategy must accordingly include explicit protections against the specific risks that government engagement creates.[^5]

The advisory bodies relevant to the field include those that operate at the level of national governments, those that operate at the level of regional or sub-national governments, and those that operate within particular government departments responsible for research policy. The structures vary across jurisdictions, but several engagement functions are common across them.

The first is participation in advisory processes that bear on research-policy decisions. The participation can take the form of formal membership on advisory committees, of submissions to consultations, of presentations to relevant committees, and of informal engagement with the staff who support the advisory processes. The participation should be conducted by the field’s senior scholars, since the advisory work requires both substantive depth and the institutional standing that gives the contributions weight.

The second is the production of research-policy reports commissioned by government bodies. The reports typically address specific questions on which the commissioning body has decided it wants analytical input, and they provide the field’s scholars with opportunities to apply the field’s methods to questions of direct policy relevance. The commissioning relationships should be governed by terms that protect the analytical independence of the work, with the commissioning body specifying the questions but not the answers.

The third is engagement with parliamentary or legislative committees on questions bearing on research policy. The engagement can include formal testimony, the provision of background briefings, and the development of relationships with committee staff who maintain institutional memory across the political cycles that affect elected officials. The engagement requires the field’s scholars to be prepared to work in environments where the audiences are not scholars and where the standards of communication are different from those of conventional scholarly publication.

The risks of government engagement deserve specific treatment. The first risk is political polarization: in jurisdictions where research policy has become a contested political issue, the field’s engagement can be drawn into the political contestation in ways that compromise its analytical credibility. The corrective is to maintain the field’s commitment to evenhanded analysis that does not align predictably with any political position, and to resist the pressure to use the field’s work to support political positions that the field’s scholars happen to favor. The pressure is real, particularly for scholars whose personal commitments align with one political position more than another, and the professional norms must explicitly support the resistance.

The second risk is the use of the field’s work by political actors in ways the field’s scholars did not intend. Reports commissioned for one purpose are sometimes cited in service of arguments their authors did not endorse, and the field’s scholars must be prepared for this and must develop the institutional practices that allow them to clarify the limits of their work without losing the engagement that the work is intended to support. The clarification can take the form of explicit statements about what the work supports and does not support, of follow-up communications when the work is used inappropriately, and of public statements when the misuse is sufficiently serious to require correction.

The third risk is institutional capture: the gradual alignment of the field’s work with the priorities of the government bodies it engages with, to the point that the field becomes a service provider to those bodies rather than an independent analytical voice. The corrective is the diversification of the field’s engagements, the maintenance of analytical independence as a non-negotiable commitment, and the cultural maintenance that Paper 4 identified as the deepest of the field’s institutional challenges.

6. International Scientific Organizations

The international dimension of the field’s engagement is the slowest of its applied activities to develop, because the international structures operate on longer timelines, with more diffuse decision-making, and with greater coordination requirements than the national-level engagements discussed above. The international agenda should accordingly be paced realistically, with the expectation that substantial international engagement will develop only after the field has consolidated at the national level in at least several jurisdictions.

The relevant international organizations include UNESCO and its science-related programs, the International Science Council and its disciplinary unions, the OECD’s working groups on research policy, and the various regional bodies that coordinate research policy within particular geographic areas. The engagement with these organizations serves several functions.[^6]

The first is the development of international comparative analyses that no single national engagement could produce. The field’s data infrastructure (Paper 5) supports comparative work across national research systems, and the international organizations provide both venues for presenting comparative findings and partners for the comparative work itself. The comparative analyses are particularly valuable for the field because they identify which patterns of attention are general features of scholarly inquiry and which are contingent on particular national arrangements (Paper 2).

The second is the development of international standards and norms that the field’s work can contribute to. The standards and norms include those for research integrity, for open science, for research-priority-setting, and for the broader practices that shape attention patterns across national systems. The field’s contribution should focus on the specific elements where neglect-studies expertise has distinctive value, rather than attempting to address the broader standards comprehensively.

The third is the support of neglect-studies development in jurisdictions where the field has not yet established itself. The international engagement provides the field’s scholars with opportunities to support colleagues in jurisdictions whose research-policy environments are less hospitable to the field’s work, to share the methodological and institutional resources the founding centers have developed, and to contribute to the broader internationalization of the field. The support should be offered on terms that respect the autonomy of scholars in different jurisdictions, recognizing that the patterns of attention vary across systems and that the field’s specific findings in one jurisdiction may not translate to others.

The risks of international engagement include all the risks of national-level engagement, with the addition of the specific risks created by the institutional complexity of international organizations. The organizations operate with constraints that the field’s scholars may not fully understand, with political dynamics that differ from those of national bodies, and with timelines that can be substantially longer than scholars accustomed to national engagement might expect. The engagement should be conducted with explicit awareness of these features and with realistic expectations about the pace and the products of the work.

7. The Independence Problem

The central tension that engagement creates — between usefulness to research-governing institutions and analytical credibility about them — runs through all the engagement domains discussed above. This section addresses the institutional structures that can manage the tension.

The first structural commitment is the diversification of the field’s relationships with research-governing institutions. A field whose engagement is concentrated with a single funder, a single agency, or a single set of relationships is vulnerable to the priorities of its dominant partner in ways that compromise its independence. The diversification across funders (Paper 4) and across engagement relationships supports the field’s capacity to refuse engagement on terms that would compromise its work, since the refusal does not threaten the field’s survival as it might if the relationship were the field’s only source of support.

The second commitment is the maintenance of analytical independence as a non-negotiable element of all engagement relationships. The relationships should be governed by explicit terms that protect the independence: the field’s scholars conduct work according to the field’s methodological standards, the outputs are published in the field’s scholarly outlets, and the engagement partners receive analytical input rather than predetermined conclusions. The terms should be documented in formal agreements rather than relying on informal understandings, since informal understandings tend to drift under sustained pressure in ways that documented agreements do not.

The third commitment is the cultural maintenance of professional norms that reward uncomfortable findings rather than punishing them. The norms operate through the field’s professional community: through the editorial practice of its journal, the program decisions of its conference, the quality standards of its registry, and its informal recognition structures. The norms must specifically reward the production of findings that engagement partners would prefer not to be published, with the recognition that such findings are the strongest evidence of the field’s analytical credibility. The norms must also protect scholars whose findings have produced friction with engagement partners, since the friction can otherwise impose career costs that discourage the kinds of work the field most needs.

The fourth commitment is the institutional separation of the field’s engagement work from its scholarly work. The separation does not mean that the same scholars cannot do both — many of the field’s most effective scholars will be engaged in both modes — but rather that the relationships and outputs of each mode should be governed by appropriate structures. The engagement work operates under the terms that engagement partners require; the scholarly work operates under the terms that the field’s professional standards establish. When the two modes generate conflicting requirements — when an engagement partner asks for findings that the scholarly standards would not support, or when scholarly findings produce friction with an engagement partner — the separation provides the field with explicit choices about how to handle the conflict, rather than blurring the modes in ways that compromise both.

The fifth commitment is the maintenance of the field’s reflexive program, which Paper 8 will develop more fully. A field that examines the patterns of attention in scholarly inquiry must be willing to examine its own attention patterns, including the patterns produced by the field’s engagement relationships. The reflexive work should include periodic audits of how the engagement relationships affect the field’s research portfolio, of whether the engagement has produced shifts in the field’s emphases that the field’s scholars would endorse on reflection, and of whether the field is becoming captured by the institutions it engages with. The audits should be conducted by the field’s professional structures rather than by the engagement partners, and the findings should be published in the field’s scholarly outlets.

The structural commitments cannot eliminate the tension between engagement and independence. The tension is intrinsic to the field’s situation and will require ongoing attention as long as the field continues to engage with research-governing institutions. The commitments can, however, manage the tension in ways that allow the field to be useful to engagement partners without losing the analytical credibility on which its usefulness depends. The management is among the most important professional disciplines the field will need to develop, and the institutional structures that support it must be designed with the recognition that the pressures against them will be persistent.

8. Conclusion

This paper has proposed an engagement strategy for neglect studies that addresses funding agencies, learned societies and journals, university administrations, government science advisory bodies, and international scientific organizations. The strategy combines specific engagement functions for each domain with structural commitments that protect the field’s analytical independence across all of them. The strategy operates on long timelines and depends on the institutional infrastructure that the preceding papers have specified.

The engagement work cannot be conducted without the workforce that Paper 6 discussed, the funding diversification that Paper 4 outlined, the data resources that Paper 5 specified, the academic infrastructure that Paper 3 proposed, the methodological standards that Paper 2 developed, and the conceptual framework that Paper 1 established. The interdependencies among the elements of the field’s institutional infrastructure are the dominant feature of the series, and the engagement paper is the most exposed to those interdependencies because it requires all the other elements to function.

The engagement work is also the most visible to external audiences and the most consequential for the field’s broader influence. The field’s eventual impact on the distribution of scholarly attention depends substantially on whether the engagement succeeds, and the engagement succeeds only if the field maintains the analytical credibility that justifies the engagement in the first place. The circle is virtuous when it works and damaging when it fails, and the structural commitments that this paper has proposed are the field’s best protection against the failure modes.

Paper 8, the final paper in the series, takes up the field’s identity, its public engagement, and the reflexive self-critique that a field of this kind requires. The engagement work that this paper has discussed will be conducted by the scholars whose identity Paper 8 will address, and the reflexive program that Paper 8 will propose will examine the engagement work among its other subjects.


Notes

[^1]: Portfolio reviews of this kind have been conducted with various funders over the past two decades, often through partnerships between academic researchers and the funders themselves. The best-documented examples include several U.S. National Institutes of Health portfolio analyses and the analyses commissioned by the Wellcome Trust and the U.K. Medical Research Council. The arrangements vary in their specific terms but share the general structure described in section 2.1.

[^2]: The literature on embedded researchers in policy organizations is reviewed in Cherney and Head (2010) and in subsequent work. The findings on what makes embedded arrangements succeed or fail are consistent across multiple national contexts and policy domains.

[^3]: The engagement of learned societies with research-priority questions has been documented in several disciplinary cases; the work of the American Association for the Advancement of Science on cross-disciplinary research priorities and the parallel work of European disciplinary unions provide relevant examples.

[^4]: The literature on tenure and promotion criteria in interdisciplinary contexts, cited in Paper 6, is developed further in work on university administrative responses to interdisciplinary fields. Klein (2010) provides a useful treatment of the institutional dimensions, and Pfirman and Martin (2010) address the specific question of how interdisciplinary scholars are evaluated.

[^5]: The literature on government science advisory bodies is reviewed in Doubleday and Wilsdon (2013) and in subsequent work on the changing role of scientific advice in policy contexts. The risks of political entanglement that the literature documents are directly relevant to the engagement strategy outlined in section 5.

[^6]: The international scientific organizations have substantial documentation of their own structures and processes, available through their websites and institutional publications. The scholarly literature on the role of international organizations in research policy is partial but includes useful treatments in Crawford, Shinn, and Sörlin (1993) and in subsequent work on the internationalization of scientific governance.


References

Cherney, A., & Head, B. (2010). Evidence-based policy and practice: Key challenges for improvement. Australian Journal of Social Issues, 45(4), 509–526.

Crawford, E., Shinn, T., & Sörlin, S. (Eds.). (1993). Denationalizing science: The contexts of international scientific practice. Kluwer Academic.

Doubleday, R., & Wilsdon, J. (Eds.). (2013). Future directions for scientific advice in Whitehall. Centre for Science and Policy, University of Cambridge.

Klein, J. T. (2010). Creating interdisciplinary campus cultures: A model for strength and sustainability. Jossey-Bass.

Pfirman, S., & Martin, P. (2010). Facilitating interdisciplinary scholars. In R. Frodeman, J. T. Klein, & C. Mitcham (Eds.), The Oxford handbook of interdisciplinarity (pp. 387–403). Oxford University Press.



Workforce and Training Pipeline: Building the Scholarly Community for Neglect Studies


Executive Summary

This paper addresses the workforce that neglect studies will require to function as a mature field. The premise is that institutional infrastructure (Paper 3), funding (Paper 4), and data resources (Paper 5) are necessary conditions for the field but not sufficient ones; the work must be done by scholars whose training, career incentives, and professional networks make sustained engagement with the field possible. Building that workforce is a longer-term project than the other infrastructure work, since training cycles are measured in years rather than months and career structures take a decade or more to mature.

The paper develops six arguments. The first is that the skill profile of a neglect-studies scholar is unusually demanding, combining substantive grounding in at least one host discipline with methodological breadth across the domains identified in Paper 2 and with the capacity to engage with research-governing institutions. The second is that the career risk profile of work on neglected questions is substantial and that the field’s workforce strategy must include explicit mechanisms for protecting early-career scholars from the professional costs that the work otherwise imposes. The third is that the field requires multiple training pathways — doctoral, master’s, continuing-education, and informal — each serving different constituencies and producing different scholarly contributions. The fourth is that mentorship structures must be designed deliberately, since the natural mentorship pathways within established disciplines do not adequately serve scholars whose work crosses disciplinary boundaries. The fifth is that the inclusion of practitioners and lived-experience contributors is intrinsic to the field’s mission rather than incidental to it, and that the workforce strategy must accordingly include structures that allow these contributors to participate as full members of the scholarly community. The sixth is that the field’s professional credentialing decisions — whether to develop formal certification, what professional association structure to build, how to define professional standards — must be made carefully and with attention to the trade-offs between consolidation and inclusiveness.

The paper concludes with a discussion of the timeline for workforce development and the specific risks the field must anticipate as the workforce matures.


1. Introduction

The institutional infrastructure outlined in the preceding papers requires people to build, sustain, and use it. The data dashboard requires scholars who can adapt bibliometric tools for neglect-mapping purposes; the registry requires editors who can evaluate evidence-tier claims; the journal requires reviewers who can assess submissions across multiple methodological domains; the abandoned-programs archive requires historians who can conduct recovery work; the funding strategy requires scholars who can engage credibly with foundation program officers and agency program staff. None of this can be done by scholars trained in conventional disciplines alone, since the field’s work routinely combines methods and substantive concerns that no single disciplinary training prepares scholars for.

The workforce problem is also the slowest of the field’s founding challenges to address. Buildings can be opened in a year; journals can be launched in two; dashboards can be developed in three. Doctoral training operates on five-to-seven-year cycles, and the scholars who emerge from those cycles need an additional decade to establish themselves as senior figures in the field. The implication is that workforce planning must begin at the field’s founding and must be sustained through the long period before its results become visible.

The argument of this paper is that the workforce strategy must address four interlocking questions. The first is what kinds of scholars the field needs and what their training should look like. The second is how the career risks of work on neglected questions can be managed, since those risks are real and the field cannot ask scholars to accept them without offering structural protections. The third is how the field’s scholarly community should be constituted, with explicit attention to who counts as a member and how the boundaries are drawn. The fourth is what professional infrastructure — associations, credentialing, standards — the field should develop and on what timeline.

The questions are interrelated, and the paper takes them up in turn while attending to the connections among them.

2. The Skill Profile

A neglect-studies scholar working at the standard the field requires must combine three kinds of capacity, each demanding in its own right and more demanding still in combination.

The first is substantive grounding in at least one host discipline. The argument is straightforward: claims that a particular question or area is neglected within a discipline depend on knowing the discipline well enough to assess what counts as established work, what the active research community considers central, and what the substantive merits of particular questions are. A scholar without serious disciplinary training cannot make such assessments credibly, and the field’s reputation depends on its scholars being credible to the disciplines whose attention patterns they study.

The grounding required is genuine rather than nominal. A neglect-studies scholar working on biomedical research portfolios must understand biomedical research well enough to engage with biomedical researchers as a colleague rather than as an outside observer. A scholar working on attention patterns in historical scholarship must know enough history to read the relevant literature with full comprehension. The standard is that the scholar’s substantive training would have allowed them to pursue a conventional career in the host discipline, with the neglect-studies work being an additional layer built on that foundation rather than a substitute for it.

The second is methodological breadth across the domains of Paper 2. The triangulation standard the field requires depends on scholars being able to read and assess findings from multiple methodological domains, and to integrate those findings into coherent conclusions. No single scholar will produce primary work in all six of the methodological domains identified in Paper 2, but every scholar should be competent to evaluate work in all of them. The competence must be deep enough to allow critical engagement rather than mere acquaintance: a scholar who cannot identify the limitations of a bibliometric analysis cannot use it appropriately, and a scholar who cannot recognize the methodological choices in an oral-history project cannot integrate its findings with other evidence.

The methodological breadth is unusual among scholars trained in established disciplines, which typically emphasize depth in a smaller range of methods. The implication is that the field’s training programs must include substantial methodological breadth as a core requirement, even though the requirement extends the time-to-completion of doctoral programs and increases the demands on students.

The third is the capacity for engagement with research-governing institutions. The field’s applied dimension (Paper 7 will develop this further) means that its scholars must be able to translate scholarly findings into terms that funders, learned societies, and policy bodies can use, and must be able to engage with those bodies in their own working modes. The capacity is partly methodological — knowing how to produce briefings, how to participate in advisory processes, how to navigate the political and institutional considerations that shape policy decisions — and partly dispositional, in the sense that it requires scholars who are willing to do the engagement work rather than treating it as secondary to publication in journals.

Not every neglect-studies scholar will engage with research-governing institutions at the same intensity, and the field has room for scholars whose work is more purely scholarly as well as for scholars whose work is more applied. But the field as a whole must include scholars in both modes, and the training programs should produce graduates who are at least equipped to engage when their work calls for it.

A specific consequence of the skill profile is that neglect-studies training is unusually long compared to training in single-discipline fields. A doctoral student combining substantive grounding in a host discipline with methodological breadth across the field’s domains and with applied capacity will typically require seven to eight years of training, compared to five to six in conventional single-discipline doctorates. The longer training has costs — to the students, to the institutions, and to the field’s growth rate — but the field cannot relax the requirement without compromising the standards on which its credibility depends.

3. Managing Career Risk

The career risks of work on neglected questions are substantial and well documented. Scholars who pursue questions outside their discipline’s center of gravity face slower publication, weaker citation accumulation, reduced grant success, and longer paths to tenure and promotion compared to scholars working on established questions. The risks are not evenly distributed: they fall more heavily on early-career scholars, on scholars at institutions without strong interdisciplinary support, and on scholars whose work crosses the political or methodological commitments of their host disciplines. The field cannot ask scholars to accept these risks without offering structural protections, and the workforce strategy must accordingly include explicit mechanisms for risk management.

Four kinds of protection deserve specific attention.

3.1 Institutional Commitments

The founding centers identified in Paper 3 should make explicit institutional commitments to the field’s early-career scholars, in the form of tenure-track positions or comparable appointments, multi-year fellowships that provide stable funding through the early-career period, and protected time for the methodological and substantive work that the standard career incentives do not adequately reward. The commitments require resources, which the funding strategy of Paper 4 must support, and they require institutional decisions that establish the field’s positions as genuinely permanent rather than as experimental appointments that can be reversed if priorities shift.

The institutional commitments should include explicit consideration of how the field’s scholars are evaluated for tenure and promotion. The standard criteria — publication in high-impact journals, citation accumulation, grant success — work poorly for scholars whose work is published in newly established outlets, whose citations accumulate more slowly because the field is small, and whose grant success is constrained by the funding mechanisms discussed in Paper 4. The evaluation criteria should be adapted to recognize the kinds of contributions the field’s scholars make, with the adaptation documented explicitly so that scholars can plan their careers around the criteria they will actually be evaluated against. The adaptation is not a special favor to the field; it is a recognition that standard criteria do not measure the contributions of scholars in emerging interdisciplinary fields accurately, and the adaptation has been made for many comparable cases.

3.2 Joint Appointments

Joint appointments between a neglect-studies center and a department in the scholar’s host discipline provide both institutional stability and methodological-substantive integration. The joint appointment allows the scholar to maintain disciplinary connections that support their substantive grounding, to publish in disciplinary outlets that support their tenure case, and to teach within disciplinary programs that maintain their teaching obligations in familiar territory. The structure has been used successfully in adjacent interdisciplinary fields and is well understood by university administrations.

The joint appointment requires careful design. The proportion of the appointment in each unit affects the scholar’s obligations and the units’ expectations, and the proportion should be set with attention to the realities of the scholar’s work rather than to administrative convenience. The tenure case must be reviewed by both units, which requires that the units’ evaluation criteria be compatible and that the scholar’s portfolio satisfies both. The teaching obligations must be distributed sensibly, with the scholar contributing to courses in both units rather than being asked to teach the full load of either. None of these design considerations is unique to neglect studies, and the established practices in adjacent fields provide reasonable models.

3.3 Cross-Institutional Mentorship Networks

Mentorship for neglect-studies scholars must come partly from outside their home institution, because the relevant senior scholars are typically scattered across institutions during the field’s early years. The founding centers should establish explicit cross-institutional mentorship networks, with senior scholars at each center serving as mentors for early-career scholars at other centers and at institutions without dedicated centers. The mentorship should include both substantive guidance on the scholar’s research program and structural guidance on navigating the career challenges specific to the field.

The mentorship networks should be funded explicitly, with travel support for in-person meetings, communication infrastructure for ongoing contact, and recognition of the mentorship work in the senior scholars’ own evaluations. The funding is modest compared to other elements of the field’s infrastructure, but the mentorship work is often unfunded and unrewarded in established disciplines, which limits how much of it can reasonably be expected from senior scholars whose other obligations are substantial.

3.4 Protected Publication Pathways

The flagship journal and companion outlets identified in Paper 3 provide publication pathways that established disciplines do not provide for neglect-studies work. The pathways protect early-career scholars from one of the most direct career costs of working on neglected questions — the difficulty of publishing in outlets that count toward tenure — but the protection works only if the field’s outlets are recognized by the scholars’ institutions and host disciplines as appropriate venues for the work.

The recognition requires deliberate effort. The flagship journal should pursue indexing in the major databases from launch (as Paper 3 specified), should accumulate citation patterns that support its standing, and should publish work that the host disciplines can recognize as serious. The founding centers should advocate within their universities for the recognition of the field’s outlets in tenure and promotion decisions, with documented arguments about the standards the outlets maintain and the scholarly community they represent. The recognition work is slow, but its absence would substantially compromise the protection that the publication pathways are intended to provide.

4. Training Pathways

Paper 3 introduced the training pathways the field requires and the curriculum framework that would govern them. This section develops the workforce implications of those pathways with greater specificity.

4.1 Doctoral Training

Doctoral training is the most consequential of the workforce pathways because doctoral graduates are the scholars whose careers will sustain the field over the long term. The doctoral programs the field establishes — whether as dedicated programs at the founding centers or as concentrations within existing programs in host disciplines — will shape the field for decades through the scholars they produce.

The argument for dedicated doctoral programs in neglect studies, as distinct from concentrations within existing programs, depends on the field’s stage of development. In the first decade, dedicated programs are probably premature: the field is too small to sustain them, the demand for graduates is too uncertain to justify the institutional commitment, and the curricular standards are not yet developed enough to support consistent training across multiple institutions. The recommendation in this period is that doctoral training should be delivered through concentrations within existing doctoral programs in host disciplines, with the concentration providing the field-specific coursework and the host program providing the substantive grounding.

In the second decade, dedicated programs become more plausible, particularly at the founding centers where the institutional commitment has been established and the demand for graduates from funding agencies, universities, and policy bodies has been demonstrated. The transition from concentrations to dedicated programs should be deliberate, with the existing concentrations continuing to operate as alternative pathways even after dedicated programs are established. The pluralism in training pathways serves the field’s broader inclusiveness and accommodates students whose institutional circumstances make concentrations more accessible than dedicated programs.

The funding of doctoral students requires specific attention. Doctoral students in conventional programs are typically funded through teaching assistantships, research assistantships on faculty grants, and institutional fellowships. The first and third of these are available to neglect-studies students through the host disciplines, but the second is constrained by the field’s funding situation (Paper 4), since the grants that would support research assistantships are themselves limited. The implication is that the field’s centers should establish dedicated doctoral fellowships as a priority, both to support students directly and to demonstrate to host institutions that the field has the resources to sustain its training commitments. Dedicated doctoral fellowships are also one of the more tractable targets for philanthropic funding, since the gift is concrete, the recipients are identifiable, and the returns on the investment are visible.

4.2 Master’s-Level Training

Master’s training serves a different function from doctoral training and accordingly should be designed differently. The constituencies for master’s training are scholars who want methodological literacy in the field without retraining as full neglect-studies scholars, practitioners in funding agencies and policy bodies who need to engage with the field’s outputs, and journalists, science communicators, and other professionals who write about research-priority questions.

The master’s curriculum should emphasize methodological breadth and applied capacity rather than substantive grounding, on the assumption that students bring their substantive expertise from prior training. The structure should accommodate part-time enrollment and distance learning, since many of the relevant students will be employed full-time and cannot relocate for residential study. The credential should be recognized in the professional contexts the students will work in, which requires the founding centers to engage with funding agencies, learned societies, and policy bodies to establish the master’s degree as appropriate preparation for the relevant roles.

A specific consideration is the relationship between the master’s program and the doctoral program. Some master’s students will subsequently want to pursue doctoral training, and the master’s curriculum should be designed to support that transition without requiring redundant coursework. The structure should also accommodate the reverse: doctoral students who decide partway through their training that their interests are better served by completing a master’s degree rather than a doctorate. The flexibility serves the students’ interests and supports the field’s broader inclusiveness.

4.3 Continuing Professional Education

Continuing education for established scholars who want to incorporate neglect-studies methods into their work is a third training stream with its own design considerations. The format should be intensive short courses, typically one to two weeks, held at the founding centers or in conjunction with the annual congress. Topics should be modular, so that participants can take individual courses without committing to a full program, and the courses should be designed to support the work of scholars whose primary identification will remain in their host disciplines rather than to convert them to full neglect-studies scholars.

The continuing-education stream serves the field’s longer-term consolidation by extending its methodological reach into adjacent disciplines and by building a network of scholars in other fields who are familiar with the field’s work. The network has practical value in providing informed reviewers for the field’s outlets, collaborators for the field’s projects, and translators of the field’s work to broader audiences. The stream also generates revenue, since participants typically pay for continuing-education courses, and the revenue can support the field’s other training activities.

The certification associated with continuing-education courses should be modest. Course completion certificates appropriate for participants’ continuing-professional-development records are reasonable; a credential that competes with formal master’s or doctoral training is not. The distinction matters because the field’s professional credibility depends on its credentials being earned through serious training rather than through short courses, and any tendency to inflate continuing-education credentials would damage the standing of the longer training pathways.

4.4 Informal Training and Self-Education

A substantial fraction of the field’s eventual workforce will not receive formal training in neglect studies at all. They will be scholars whose primary training is in other disciplines, who encounter the field through their substantive work, and who develop methodological and conceptual literacy through self-education, reading, and engagement with the field’s literature and community. The informal pathway is the dominant one in any emerging field, and neglect studies is no exception.

The field’s responsibilities to informally trained scholars include making the literature accessible (open-access publication, clear writing, documented methods), providing entry points for newcomers (introductory courses, review articles, methodological tutorials), and welcoming informally trained scholars into the professional community on terms that do not require formal credentials. The professional association the field eventually develops should explicitly accommodate informally trained members and should not adopt membership criteria that exclude them.

A specific consideration is the relationship between informally trained scholars and the field’s quality standards. The methodological standards of Paper 2 apply to the field’s work regardless of how the scholars who do the work were trained, and the field’s outlets should evaluate submissions against the standards rather than against the credentials of the authors. The structure preserves the field’s standards while remaining inclusive of scholars whose pathways into the field do not include formal training, and it depends on the field’s outlets being run by editors and reviewers whose evaluations focus on the work rather than on the authors’ credentials.

5. Mentorship and Community Formation

The professional development of scholars in any field depends substantially on mentorship and on the informal networks through which knowledge, norms, and opportunities circulate. The mentorship problem in neglect studies has specific features that the field’s workforce strategy must address.

The first feature is that the senior scholars who would naturally serve as mentors are themselves scarce in the field’s early years. A field that does not yet have many senior figures cannot offer the mentorship density that established fields provide, and the early-career scholars in the field must accept a mentorship environment that is thinner than the one their counterparts in established disciplines enjoy. The corrective is partial: deliberate effort to extend the available mentorship capacity through cross-institutional networks, through inclusion of senior scholars from adjacent fields as mentors for substantive aspects of the work, and through structures that allow early-career scholars to mentor one another in ways that established mentorship cultures do not typically support.

The second feature is that the mentorship the field requires differs from the mentorship typical of single-discipline training. A neglect-studies scholar needs mentorship on substantive questions in their host discipline, on the methodological work that the field’s training emphasizes, and on the career navigation specific to interdisciplinary work in an emerging field. No single mentor is likely to be able to provide all three, and the field’s mentorship structures should be explicit about constructing mentorship teams rather than assigning single mentors.

The third feature is that mentorship in the field must address the career risks discussed in section 3 above. Senior scholars who have navigated the relevant risks themselves are valuable mentors, but they are scarce, and the field’s mentorship structures must be able to draw on senior scholars from adjacent fields whose experience is informative even when not identical. The mentorship conversation should include explicit discussion of risk management, with the senior scholars sharing what they have learned about navigating the career challenges that the early-career scholars face.

The community formation that supports mentorship operates through the field’s conferences, workshops, journals, and informal communication channels. The structures the preceding papers have specified — the annual congress, the specialized workshops, the flagship journal and companion outlets, the registry — all serve community-formation functions in addition to their primary purposes. The workforce strategy should attend explicitly to how these structures support community formation and should design them with that function in view.

A specific consideration is the inclusion of early-career scholars in the field’s governance and editorial structures. Junior representation on the editorial board of the flagship, on the program committee of the annual congress, on the editorial committee of the registry, and on the governance bodies of the founding centers serves both the practical function of bringing fresh perspectives and the developmental function of giving early-career scholars the experience of contributing to the field’s institutional work. The inclusion should be deliberate, with explicit positions reserved for junior scholars and with the positions providing genuine influence rather than nominal participation.

6. Practitioners and Lived-Experience Contributors

The field’s mission includes attention to questions whose neglect bears on communities whose concerns have not been adequately represented in scholarly research. Both the constituency-less-questions category and the peripheral-inquiries category in the Paper 1 taxonomy bear on this concern, and the priority-setting partnership tradition that the field draws on has developed explicit structures for including affected communities in research design.

The workforce strategy should accordingly include practitioners and lived-experience contributors as full members of the scholarly community rather than as informants or consultants. The distinction matters. A practitioner who is included as an informant provides information that the scholarly researchers analyze; a practitioner who is included as a full member of the community participates in the design of the research, in the interpretation of findings, and in the production of conclusions. The latter inclusion is more demanding institutionally and more rewarding intellectually, and the field’s structures should support it.

The specific structures that support full inclusion include: practitioner co-investigator roles on research projects, with the practitioners involved from the design stage rather than recruited at the data-collection stage; lived-experience contributor positions on editorial boards and governance bodies, with the positions providing genuine influence; training programs designed to support the participation of practitioners and lived-experience contributors who do not have conventional research training; and authorship practices that recognize the contributions of practitioners and lived-experience contributors to research outputs.

The structures require deliberate design and ongoing maintenance. The default patterns of academic work tend toward the exclusion of non-academic contributors, and the structures that support inclusion must be defended against the gravitational pull toward the default. The defense requires institutional commitment from the founding centers, editorial commitment from the field’s outlets, and professional commitment from the field’s individual scholars to maintain the practices in their own work.

A specific consideration is the compensation of practitioners and lived-experience contributors. Academic work has well-established compensation patterns — salaries, grants, honoraria, publication credits — that do not adequately compensate non-academic contributors for their participation. The field’s structures should include explicit provisions for compensating practitioners and lived-experience contributors at rates that reflect the value of their contributions, with the compensation provided through mechanisms that do not require the contributors to navigate the bureaucratic complexity of academic payment systems.

7. Professional Infrastructure

The professional infrastructure of the field — associations, credentialing, standards — must be developed deliberately, and the decisions about its design have long-term consequences for the field’s character. The discussion below addresses the major questions the field must answer.

7.1 Professional Association

A professional association for neglect studies provides several functions that the founding centers and the journal alone cannot provide. The association maintains the field’s membership records, organizes the annual congress, administers the credentialing structures the field eventually adopts, advocates for the field’s interests in research-policy contexts, and provides the institutional voice that speaks for the field as a whole.

The association should be established once the field has accumulated a critical mass of members, probably in the second half of its first decade. Earlier establishment is premature because the membership base is too small to support the association’s operations; later establishment is risky because the functions the association provides become harder to assemble after the field has developed without them.

The association’s structure should be governed by its members, with elected leadership, term limits, and explicit provisions for inclusion of diverse perspectives within the field. The membership criteria should be inclusive of the various pathways into the field discussed in section 4 above, with formal credentials not being a requirement for membership. The dues structure should accommodate scholars and practitioners with different income levels, with the developing-country and student rates set low enough to allow genuine participation rather than being symbolic concessions.

7.2 Credentialing

The credentialing question — whether the field should develop formal certification for its scholars and on what terms — is more difficult than the association question. The case for credentialing is that it provides quality assurance for the field’s outputs, that it supports the recognition of the field by external institutions, and that it creates clear pathways for career development. The case against is that credentialing tends to consolidate the field around the credentialing body’s preferences, that it raises barriers to entry that exclude informally trained scholars, and that it can become an end in itself rather than a means to the field’s broader purposes.

The recommendation is that formal credentialing should be deferred at least until the field’s second decade and possibly indefinitely. The field’s quality assurance can be provided through the journal’s peer review, the registry’s evidence-tier system, and the professional reputation of individual scholars within the community, without requiring formal certification. The recognition by external institutions can be supported by the documented standards the field maintains in its other infrastructure, without requiring a credential that the institutions can refer to. And the career pathways can be supported by the training programs and mentorship structures without requiring a credential at their endpoint.

If formal credentialing is eventually developed, it should be limited in scope and modest in its claims. A credential that certifies completion of specific methodological training is more defensible than a credential that claims to certify someone as a neglect-studies scholar in general, and the field should be cautious about expanding any credential it adopts beyond its initial scope.

7.3 Professional Standards

Professional standards for the field — what counts as ethical practice, what conflicts of interest must be disclosed, what evidence standards apply to different kinds of claims — should be developed explicitly and documented in a code of practice that the professional association maintains. The standards should draw on the established standards of adjacent fields where they apply, and should add field-specific provisions where the field’s distinctive features require them.

Specific provisions that the field’s standards should address include: the disclosure of funding sources for work that bears on the funders’ own decision-making (a consideration that arises directly from the funding strategy of Paper 4); the standards for representing absent constituencies in work on constituency-less questions (a consideration from Paper 2); the standards for protecting interview subjects whose accounts bear on professional reputations and institutional relationships (a consideration from Paper 5); and the standards for engagement with research-governing institutions (a consideration that Paper 7 will develop further).

The standards should be developed through a consensus process that includes the field’s diverse perspectives, should be reviewed periodically as the field’s experience accumulates, and should be enforced through professional norms rather than through formal disciplinary procedures. The enforcement question is delicate: formal disciplinary procedures require institutional infrastructure that the field will not have for many years, and informal enforcement through professional norms depends on the field’s community being cohesive enough for the norms to operate. The recommendation is to rely on informal enforcement during the field’s early years, with the possibility of more formal procedures considered if the informal mechanisms prove inadequate.

8. Timeline and Risks

The workforce development outlined above operates on a timeline considerably longer than the other infrastructure work the series has discussed. The rough timeline is as follows.

In the first three years, the founding centers should establish the doctoral concentrations within host disciplines, recruit the first cohorts of doctoral students, and begin the cross-institutional mentorship networks. The first master’s programs can launch in this period if institutional support permits, though the second half of the first decade is more realistic.

In years four through seven, the first cohort of doctoral graduates emerges, the master’s programs are operating, and the continuing-education stream is established. The professional association should be founded in this period, and the field’s community begins to recognize itself as a community rather than a collection of individual scholars.

In years eight through twelve, the early-career scholars from the first doctoral cohort move into junior faculty positions, the mentorship density begins to increase as more senior figures emerge, and the question of dedicated doctoral programs becomes pressing. The second-generation scholars trained by the first generation begin to emerge.

In years thirteen and beyond, the field reaches what counts as maturity: established doctoral programs, recognized credentials, a critical mass of senior scholars, and a professional community that operates with the conventions and capacities of established fields. The exact timeline depends on the pace at which the field grows, which depends in turn on the funding situation, the demand for the field’s outputs, and the broader research-policy environment.

The risks the workforce strategy must anticipate include several that deserve explicit treatment. The first is that the field grows too quickly relative to its capacity to maintain quality, producing graduates whose training does not meet the standards the field requires and damaging the credibility of the credentials the field offers. The corrective is to maintain selective admissions to the training programs and to resist the pressure to expand capacity beyond what can be supported with adequate mentorship and resources.

The second risk is that the field grows too slowly to sustain the institutional infrastructure that the preceding papers have specified. A flagship journal needs a steady flow of submissions and reviewers; a registry needs entries; a conference needs attendees. If the workforce does not grow to support these activities, the institutional infrastructure becomes a burden rather than a foundation. The corrective is to plan the institutional infrastructure with realistic estimates of workforce growth and to scale the infrastructure as the workforce develops.

The third risk is that the field’s workforce becomes demographically narrow, drawing scholars predominantly from particular institutions, particular regions, particular socioeconomic backgrounds, or particular substantive areas. The narrowness would compromise the field’s intellectual breadth and its credibility in addressing the equity concerns that some of its work bears on. The corrective is deliberate attention to recruitment, to fellowship targeting, and to the institutional accessibility of the training programs, with the attention sustained over the long period required for demographic patterns to change.

The fourth risk is the one that connects most directly to the field’s mission. A field that examines the patterns of attention in scholarly inquiry must be willing to examine its own attention patterns, and the workforce strategy must include the reflexive commitments that allow such self-examination. The reflexive work is uncomfortable and tends to be deferred, and the field’s professional norms must explicitly support it. Paper 8 takes up the broader question of the field’s reflexive identity, and the workforce strategy must be designed in coordination with that work.

9. Conclusion

This paper has proposed a workforce strategy for neglect studies that combines demanding skill requirements with explicit mechanisms for managing career risk, multiple training pathways with structures for mentorship and community formation, the inclusion of practitioners and lived-experience contributors as full members of the scholarly community, and the gradual development of professional infrastructure with caution about premature consolidation. The strategy operates on a longer timeline than the other elements of the field’s founding work, and its results become visible only after the field’s first decade has substantially passed.

The workforce strategy cannot be implemented without the institutional infrastructure of Paper 3, the funding of Paper 4, and the data resources of Paper 5. The strategy presupposes the methodological standards of Paper 2 and the conceptual framework of Paper 1. The interdependencies remain the dominant feature of the series, and the workforce paper is no exception.

Paper 7 takes up the field’s engagement with the research-governing institutions whose decisions the field exists to inform. The workforce that this paper has discussed is the workforce that will conduct that engagement, and the design of the engagement structures must accordingly attend to who is available to staff them.


Notes

[^1]: The literature on the career challenges of interdisciplinary scholars is reviewed in Rhoten and Parker (2004) and in the more recent treatment by Leahey, Beckman, and Stanko (2017). The findings are consistent across multiple settings: interdisciplinary work tends to have higher long-term impact but lower short-term recognition, with the career consequences falling more heavily on early-career scholars.

[^2]: The joint-appointment literature is partly empirical and partly practical; Pfirman and Martin (2010) provides one useful treatment, and the operational documents of comparable interdisciplinary fields provide additional guidance.

[^3]: The literature on doctoral training in interdisciplinary fields, introduced in Paper 3, develops further in Holley (2009, 2017) and in the subsequent work on the structure and outcomes of cross-disciplinary doctoral programs.

[^4]: The patient and public involvement literature, cited in Papers 2 and 4, provides extensive treatment of the inclusion of non-academic contributors in research; see Manafò, Petermann, Vandall-Walker, and Mason-Lai (2018) and the broader literature it reviews.

[^5]: The literature on professional associations in emerging fields is largely practical, drawing on the histories of comparable cases; the institutional documents of recently established interdisciplinary associations (the Society for the History of Recent Social Science, the Metascience Society) provide useful templates.

[^6]: The credentialing literature in adjacent fields is reviewed in Lester (2009) and in the subsequent work on professional certification in interdisciplinary contexts. The trade-offs the literature identifies are consistent with the analysis in section 7.2 above.


References

Holley, K. A. (2009). Understanding interdisciplinary challenges and opportunities in higher education. ASHE Higher Education Report, 35(2). Jossey-Bass.

Holley, K. A. (2017). Interdisciplinary curriculum and learning in higher education. In Oxford research encyclopedia of education. Oxford University Press.

Leahey, E., Beckman, C. M., & Stanko, T. L. (2017). Prominent but less productive: The impact of interdisciplinarity on scientists’ research. Administrative Science Quarterly, 62(1), 105–139.

Lester, S. (2009). Routes to qualified status: Practices and trends among UK professional bodies. Studies in Higher Education, 34(2), 223–236.

Manafò, E., Petermann, L., Vandall-Walker, V., & Mason-Lai, P. (2018). Patient and public engagement in priority setting: A systematic rapid review of the literature. PLOS ONE, 13(3), e0193579.

Pfirman, S., & Martin, P. (2010). Facilitating interdisciplinary scholars. In R. Frodeman, J. T. Klein, & C. Mitcham (Eds.), The Oxford handbook of interdisciplinarity (pp. 387–403). Oxford University Press.

Rhoten, D., & Parker, A. (2004). Risks and rewards of an interdisciplinary research path. Science, 306(5704), 2046.



Data Infrastructure: Mapping the Negative Space


Executive Summary

This paper addresses the data infrastructure that neglect studies will require to function as a mature field. The premise is that a discipline whose central business is the systematic identification of underexplored questions cannot proceed on the basis of impressionistic claims alone; it requires standing data resources that allow scholars to map the distribution of attention across the research landscape, to identify candidate cases of neglect with defensible empirical grounding, and to track changes over time. The infrastructure must serve both the field’s internal research agenda and its applied engagement with research-governing institutions, and it must do so under constraints — coverage biases in existing databases, the difficulty of measuring absence rather than presence, the ethical complications of qualitative data — that the field shares with adjacent enterprises but that it must address with its own resources.

The paper develops six proposals. The first is a standing public dashboard of research distribution, drawing on existing bibliometric infrastructure but inverting the standard analytical questions to produce maps of negative space. The second is a structured registry of neglected questions, building on the proposal introduced in Paper 3 but specified here in greater detail as a data resource. The third is an archive of abandoned research programs, preserving documentation that would otherwise be lost as founding scholars retire and institutional records are discarded. The fourth is a set of common data elements — shared definitions, instruments, and codebooks — that allow studies conducted by different scholars in different settings to produce findings that aggregate meaningfully. The fifth is a federated data-access model that respects the privacy, intellectual property, and political constraints under which different data sources operate. The sixth is a qualitative archive of oral histories, interviews, and ethnographic materials that preserve the tacit knowledge of scholars whose careers have included engagement with neglected questions.

The paper concludes with a discussion of governance, sustainability, and the relationship between the field’s data infrastructure and the broader open-science movement on which much of it will depend.


1. Introduction

The methodological argument of Paper 2 was that no single method is adequate to support rigorous claims about neglect and that the field’s standards should require triangulation across methods. The institutional argument of Paper 3 was that the field requires academic infrastructure within which methodologically serious work can be conducted. This paper takes up what stands between them: the data resources on which methodologically serious work depends.

The argument can be stated simply. Bibliometric methods require databases. Expert elicitation requires panels and protocols. Historical recovery requires archives. Comparative analysis requires harmonized data across systems. Counterfactual reasoning requires structured estimates and the documentation of their assumptions. Qualitative methods require interview transcripts, field notes, and the apparatus for preserving them ethically. None of this infrastructure exists in a form designed for neglect-studies purposes, and the field’s capacity to produce credible findings depends on building it.

The argument is complicated by three features of the field’s situation. The first is that the data infrastructure required is partly continuous with existing scholarly resources — the standard bibliometric databases, the institutional archives of universities and funders, the established protocols of qualitative research — and partly distinctive, in that it must support analyses of absence rather than presence. The continuity allows the field to draw on substantial existing investment; the distinctiveness means that the field cannot simply inherit existing infrastructure and must adapt or build what is missing. The second is that data infrastructure is expensive to build and to maintain, and the field’s funding situation (Paper 4) makes choices about which resources to prioritize consequential. The third is that the data the field needs is held in many different places under many different governance arrangements, and the work of integration is at least as substantial as the work of original data collection.

The paper proceeds through six proposals, organized roughly by the maturity of the underlying infrastructure. The dashboard and the registry can be built relatively quickly on existing foundations. The abandoned-programs archive and the qualitative archive require more original work. The common data elements and the federated access model are coordination problems whose solutions depend on the field’s relationships with adjacent enterprises rather than on its own resources alone.

2. A Standing Dashboard of Research Distribution

The first proposal is a publicly accessible dashboard that visualizes the distribution of research attention across topics, populations, regions, methods, and disciplines, with the structure of the dashboard designed to make patterns of apparent neglect visible.

The technical foundation already exists. Open bibliometric databases — OpenAlex is the most ambitious, with several others operating on smaller scales — provide structured data on publications, citations, authors, and institutional affiliations at the scale required.[^1] The major commercial databases (Web of Science, Scopus, Dimensions) provide comparable data with different coverage profiles and licensing conditions. Topic-modeling and clustering tools developed in computational social science allow the data to be organized into meaningful categories.[^2] Visualization platforms developed in the broader open-data movement provide the user-facing layer.

What is missing is the integration of these resources into a single tool designed for neglect-studies questions. The standard bibliometric tools are designed to map what is being studied; the proposed dashboard would invert the standard analytical move to map what is conspicuously absent given the structure of adjacent literatures, the distribution of relevant external indicators, or historical baselines.

The dashboard’s design should include several specific features. It should allow users to specify a topic, discipline, or research area and to receive structured visualizations of how attention to that area has been distributed over time, across regions, across institutional types, and across methodological approaches. It should allow users to compare the distribution of attention to the distribution of external indicators of importance — disease burden for medical topics, environmental exposure for environmental topics, demographic significance for population-related topics — with explicit acknowledgment of the imperfections of the comparison. It should allow users to identify topics within a discipline that have unusual patterns of attention given the structure of the surrounding literature, with the unusualness flagged as a candidate for further investigation rather than as a conclusion. And it should allow users to track changes in attention patterns over time, both to identify areas where attention has been declining and to identify areas where previously neglected questions have begun to receive sustained attention.
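
The attention-versus-indicator comparison can be sketched in miniature. The fragment below is an illustrative sketch only: the topic names, publication counts, and burden figures are invented, not drawn from any real database, and the threshold is an arbitrary placeholder rather than a recommended value.

```python
# Sketch of the attention-vs-importance comparison. All names and figures
# are invented for illustration, not drawn from any bibliometric database.

def neglect_candidates(pub_counts, indicator, ratio_threshold=0.5):
    """Flag topics whose share of publications falls below
    ratio_threshold times their share of an external indicator."""
    total_pubs = sum(pub_counts.values())
    total_ind = sum(indicator.values())
    flagged = {}
    for topic, count in pub_counts.items():
        attention_share = count / total_pubs
        importance_share = indicator[topic] / total_ind
        ratio = attention_share / importance_share
        if ratio < ratio_threshold:
            # A candidate for further investigation, not a conclusion.
            flagged[topic] = round(ratio, 2)
    return flagged

# Hypothetical inputs: publications per topic vs. a burden-style indicator.
pubs = {"topic_a": 900, "topic_b": 80, "topic_c": 20}
burden = {"topic_a": 500, "topic_b": 300, "topic_c": 200}
print(neglect_candidates(pubs, burden))
```

The sketch makes the exploratory status concrete: a low ratio marks a candidate hypothesis, and the dashboard’s job is to surface such candidates alongside their caveats rather than to declare neglect.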

The dashboard should not produce conclusive identifications of neglect. The tiered evidence standard of Paper 2 places dashboard findings at the exploratory level, as candidate hypotheses for further investigation rather than as substantiated claims. The dashboard should communicate this status to its users explicitly, with documentation of the limitations of the underlying data, the assumptions built into the visualizations, and the methodological caveats that apply to the patterns it displays. The communication is important: a tool that produces visually compelling representations of apparent neglect will be used by audiences who have not read the methodological caveats, and the dashboard’s credibility depends on its design making the limitations as visible as the findings.

The governance of the dashboard requires deliberate attention. The technical operation can reasonably be hosted by one of the founding centers, but the editorial decisions — which databases to include, how to structure topic classifications, how to handle the coverage biases of the underlying data — affect what users see and should be made through a process that includes diverse perspectives within the field. A standing editorial committee, with rotating membership and explicit responsibility for the dashboard’s accuracy and balance, is the appropriate structure. The committee should publish its methodological decisions, should respond to documented errors in the dashboard’s outputs, and should commission periodic external audits of the dashboard’s design.

The sustainability of the dashboard is the most difficult question. Open data resources of comparable scale have generally been funded through some combination of foundation support, institutional underwriting, and modest revenue from premium access for institutional users. The dashboard’s funding model should be settled at its launch and should include explicit provisions for what happens if any of its funding sources is withdrawn. A dashboard that ceases operation after five years because its initial grant ends would damage the field’s credibility more than no dashboard at all, and the funding planning should accordingly be conservative.

3. A Registry of Neglected Questions

The registry of neglected questions was introduced in Paper 3 as a companion publication outlet to the flagship journal. This section specifies its design as a data resource.

The registry’s central function is to provide a structured, citable, and updatable record of cases in which scholars have identified questions, populations, methods, or areas as neglected, with the documentation appropriate to the tier of evidence on which the identification rests. Each registry entry should include: the question or area identified; the discipline or disciplines to which it belongs; the category of neglect (in the Paper 1 taxonomy) to which the identification primarily applies; the methodological approach used to substantiate the identification; the evidence tier (exploratory, substantiated, rigorous) at which the identification stands; the scholars responsible for the identification; the date of submission and the dates of subsequent updates; pointers to relevant literature; and an open field for commentary, including the documentation of subsequent work that has either substantiated or revised the original identification.
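
The entry fields listed above can be rendered as a simple structured record. The sketch below assumes nothing about the registry’s eventual storage format; the field names, tier labels, and the sample entry are illustrative only.

```python
# Illustrative registry-entry structure; not a finalized schema.
from dataclasses import dataclass, field
from enum import Enum

class EvidenceTier(Enum):
    EXPLORATORY = "exploratory"
    SUBSTANTIATED = "substantiated"
    RIGOROUS = "rigorous"

@dataclass
class RegistryEntry:
    question: str
    disciplines: list[str]
    neglect_category: str        # from the Paper 1 taxonomy
    method: str                  # approach used to substantiate the claim
    tier: EvidenceTier           # displayed prominently alongside the entry
    contributors: list[str]
    submitted: str               # ISO date of submission
    updates: list[str] = field(default_factory=list)
    literature: list[str] = field(default_factory=list)
    commentary: list[str] = field(default_factory=list)

# A wholly hypothetical entry.
entry = RegistryEntry(
    question="(hypothetical) a long-running orphan topic in field ecology",
    disciplines=["ecology"],
    neglect_category="orphan topic",
    method="bibliometrics triangulated with expert elicitation",
    tier=EvidenceTier.SUBSTANTIATED,
    contributors=["A. Scholar"],
    submitted="2030-01-15",
)
```

Keeping the tier as an enumerated value, rather than free text, is one way to support the quality-control requirement that the claimed tier be displayed and checked consistently.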

The registry serves several functions that distinguish it from a journal. It supports cumulative knowledge in cases where individual contributions are too small to support full articles but where the collective contribution is substantial. It allows updating, so that a registry entry can be revised as new evidence emerges or as subsequent work changes the picture. It allows linking, so that an entry can document the relationships among related cases of neglect rather than treating each in isolation. And it produces a public-facing record that funders, learned societies, and policy bodies can consult, in a form that is more accessible to non-specialists than the journal literature alone.

The quality control problem is the same one identified in Paper 3 and bears restating here in the context of the data design. An open submission system will receive contributions that do not meet the field’s standards, and a registry that becomes a repository for unsubstantiated claims will damage the field’s credibility more comprehensively than the dashboard would, because the registry’s structured format gives entries a status that visualizations of bibliometric data do not. The proposed quality control combines three elements. First, submissions must include documentation appropriate to the evidence tier claimed, and the documentation requirements should be specified explicitly in the registry’s submission guidelines. Second, an editorial committee should review submissions before publication, with the review focused on whether the documentation supports the tier claimed rather than on the substantive merit of the identification. Third, the registry interface should display the evidence tier prominently alongside each entry, so that users can calibrate their interpretation accordingly.

A specific design question concerns the registry’s handling of contested cases. Some identifications of neglect will be contested either at the time of submission or subsequently, with different scholars reaching different conclusions about whether a particular case is genuinely neglected or appropriately deprioritized. The registry should accommodate such cases by allowing entries to include linked dissenting opinions, with the dissents documented to the same standards as the original entries. The structure communicates to users that the field’s claims are open to revision and that the registry preserves rather than suppresses methodological disagreement. The structure also creates a venue for the kind of structured argumentative engagement that produces the field’s most rigorous work over time.

The relationship between the registry and the flagship journal requires attention. The two are complementary rather than redundant: the journal publishes the methodological development, the case studies, and the field-wide analyses that contextualize particular registry entries, while the registry preserves the structured record of identifications themselves. A scholar who has identified a neglected question and produced a journal article about it should be expected to deposit a corresponding registry entry, both to make the identification accessible to subsequent scholars and to allow the entry to be updated as the case develops over time.

4. An Archive of Abandoned Research Programs

The third proposal is an archive specifically devoted to abandoned research programs — the category of neglect introduced in Paper 1 and developed methodologically in Paper 2. The case for the archive rests on the observation that the documentation of abandoned programs is unusually vulnerable to loss. When a research program is active, its documentation accumulates in journal literatures, institutional records, and the working files of active researchers. When the program is abandoned, its journal literature becomes increasingly difficult to locate, institutional records are discarded according to standard retention schedules, and the working files of researchers are lost when those researchers retire, die, or move on to other work. The window for recovering the documentation of an abandoned program closes within a generation or two of the abandonment, and once it closes the recovery becomes substantially harder.

The archive should be conceived as a federated rather than a centralized resource. Centralizing the physical collections of multiple abandoned programs at a single institution would be impractical and probably inappropriate, since the materials are often located at the institutions where the work was conducted and where local custodial expertise exists. The federated model treats the archive as a directory and integration layer that points to the holdings of multiple host institutions, that ensures consistent metadata across the holdings, and that supports cross-collection searching and analysis.

The archive’s metadata standard should include: the research program identified; the period of its active operation; the disciplinary location of the work; the reasons for abandonment, as documented in available sources; the institutions and individuals associated with the program; the holdings related to the program (publications, archival materials, working files, datasets, instruments) and the institutions where they are located; the contact information for the custodians of those holdings; and pointers to subsequent work on the program, whether by historians, by neglect-studies scholars, or by researchers attempting revival.
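
The federated design can be illustrated with a toy directory: lightweight metadata records point to holdings at host institutions, and cross-collection search operates over the metadata alone. The records, program names, and institutions below are invented.

```python
# Toy federated directory: metadata records point to holdings elsewhere.
# All records and institution names are invented for illustration.
records = [
    {"program": "Program Alpha", "active": "1965-1982",
     "discipline": "physiology", "abandonment_reason": "funding withdrawn",
     "holdings": [{"type": "working files", "host": "University A Archives"}]},
    {"program": "Program Beta", "active": "1971-1990",
     "discipline": "linguistics", "abandonment_reason": "key retirements",
     "holdings": [{"type": "datasets", "host": "Institute B Library"}]},
]

def search(records, **criteria):
    """Return records whose metadata fields contain every criterion value."""
    return [r for r in records
            if all(str(value).lower() in str(r.get(key, "")).lower()
                   for key, value in criteria.items())]

# Cross-collection query: which holdings relate to linguistics programs?
for r in search(records, discipline="linguistics"):
    print(r["program"], "->", [h["host"] for h in r["holdings"]])
```

The point of the sketch is the division of labor: the archive maintains consistent metadata and the search layer, while the physical holdings stay with their custodial institutions.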

A specific component of the archive should be devoted to the oral histories of scholars whose careers included involvement in abandoned programs. The oral-history component is discussed in greater detail in section 7 below, but the connection to the abandoned-programs archive deserves note here. Many abandoned programs have living participants whose tacit knowledge would be lost without deliberate preservation, and the cost-benefit calculation for oral-history work strongly favors beginning the work while participants are still available rather than waiting until the field’s other priorities are settled.

The archive should be developed in stages, beginning with a small number of well-documented cases that the founding centers identify as priorities. The first decade’s work should aim for perhaps twenty to forty programs documented to the standard described above, with the cases selected for their illustrative value, their historical interest, and the availability of participants for oral-history work. The selection process should be transparent, with the criteria published and the choices documented, so that subsequent scholars can both build on the early work and contribute additional cases through whatever scholarly community develops around the archive.

The relationship between the archive and the existing infrastructure for the history of science deserves explicit treatment. Several major repositories — the American Institute of Physics’s Niels Bohr Library, the Wellcome Trust’s archives, the Charles Babbage Institute, and others — already preserve materials related to scientific work, including some materials related to programs that were subsequently abandoned. The proposed archive should not duplicate this work but should integrate with it, providing the metadata layer that allows existing holdings to be discovered and used for neglect-studies purposes. The integration work requires collaboration with the existing repositories and respect for their custodial responsibilities, and the archive’s design should make the collaboration straightforward rather than imposing additional burdens on partners whose own missions are different.

5. Common Data Elements

The fourth proposal addresses a coordination problem rather than a single resource. The methodological pluralism of the field (Paper 2) means that studies using different methods will produce findings whose aggregation requires shared definitions, instruments, and codebooks. Without such shared elements, the field’s outputs will be a collection of one-off studies whose findings cannot be combined to produce field-wide understanding, and the cumulative-knowledge function that distinguishes a mature field from a collection of individual scholars will be defeated.

The common data elements (CDEs) that the field requires fall into several categories.

The first is shared definitions for the categories of neglect introduced in Paper 1. The taxonomy proposed there — orphan topics, abandoned research programs, methodologically inaccessible questions, interstitial questions, peripheral inquiries, sensitive questions, and constituency-less questions — provides a starting vocabulary, but the categories require operationalization before they can be applied consistently across studies. The operationalization work should produce definitions specific enough to allow different scholars to classify cases the same way, while remaining flexible enough to accommodate the variation across cases that the categories are intended to capture. The work should be conducted through the kind of consensus-building process that Paper 1 introduced and that subsequent papers will need to specify further.

The second is shared instruments for the methodological approaches of Paper 2. Expert elicitation in particular benefits substantially from standardized protocols, both because the protocol design affects the findings in well-documented ways and because comparison across studies depends on the protocols being similar enough to allow meaningful comparison. The James Lind Alliance methodology provides a partial model,[^3] but the model is specific to health-research priority-setting and requires adaptation for the broader range of cases that neglect studies must address. Similar standardization work is needed for the documentation requirements of bibliometric mapping, for the protocols of historical recovery, and for the structure of counterfactual estimation.

The third is shared codebooks for the variables that recur across studies. Bibliometric studies routinely use measures of citation, of collaboration, of geographic distribution, and of disciplinary classification, and the comparison across studies depends on the measures being defined consistently. The existing literature on bibliometric methods provides considerable standardization, but neglect-studies applications may require extensions — for example, measures of attention to particular categories of questions, measures of the rate of change in attention over time, measures of the cross-disciplinary scope of attention — that are not yet standardized.
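The kind of extension the paragraph above mentions — a standardized measure of attention and of its rate of change — can be sketched concretely. The sketch below is hypothetical: the counts are invented, and a real codebook entry would specify the publication-counting rules, the topic-classification method, and the normalization in much greater detail.

```python
# Two attention measures a shared codebook might standardize: the share of
# a discipline's annual output devoted to a question, and the change in
# that share over time. All counts below are invented for illustration.

topic_pubs = {2018: 12, 2019: 9, 2020: 7, 2021: 4}              # publications on the question
field_pubs = {2018: 4000, 2019: 4100, 2020: 4300, 2021: 4500}   # publications in the discipline

def attention_share(year):
    """Fraction of the discipline's output addressing the question."""
    return topic_pubs[year] / field_pubs[year]

def attention_trend(start, end):
    """Absolute change in attention share between two years; a negative
    value is one operational signal of declining attention."""
    return attention_share(end) - attention_share(start)

share_2018 = attention_share(2018)    # 12 / 4000 = 0.003
trend = attention_trend(2018, 2021)   # negative: the share fell
```

The point of standardizing even measures this simple is that two studies computing "attention" with different denominators cannot be aggregated; the codebook fixes the denominator.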

The development of CDEs is itself a substantial scholarly activity that requires its own resources. The work should be coordinated through one of the founding centers, with input from the field’s broader community through the structures (journal, conferences, registry) that Paper 3 established. The CDE work should be conducted transparently, with proposed elements published for community comment before adoption, and the adopted elements should be revised periodically as the field’s experience accumulates.

A specific consideration is the relationship between the field’s CDEs and those developed in adjacent fields. The CDE work in clinical research, in social science research, and in metascience has produced substantial infrastructure that the field can adapt rather than building from scratch.[^4] The adaptations require domain-specific work but the underlying methodological standards translate well, and the field’s CDE development should proceed in conversation with the adjacent efforts rather than in isolation from them.

6. Federated Access and Data Governance

The fifth proposal addresses the data-governance challenges that arise when the field’s research requires access to data held by multiple parties under multiple governance arrangements. The challenges are well documented in adjacent fields — health research, social-science survey data, education research — and the field can draw on established models rather than developing new approaches.[^5]

The relevant data types include: bibliometric data held by commercial and open providers under varying licensing terms; funding data held by agencies under varying disclosure policies; institutional data held by universities under privacy and competitive considerations; qualitative data held by individual scholars and subject to research-ethics review; and personal data — including the identities of scholars who have made career choices around particular questions — subject to the strictest protections.

The federated access model treats the data as remaining under the control of its original custodians, with access provided through standardized interfaces that allow analyses to be conducted without the data being transferred. The model is more complex to implement than centralized data deposits but has corresponding advantages: it preserves the data custodians’ control over their data, it accommodates the varying governance arrangements under which different data are held, and it allows analyses to be conducted on data that could not be released for centralization under any plausible governance arrangement.

The implementation of federated access requires technical infrastructure (standardized query interfaces, audit logs, computational environments that allow analysis without data transfer) and governance infrastructure (data-sharing agreements, access procedures, dispute-resolution mechanisms). Both have been developed in adjacent fields and the field’s implementation should draw on those models. The technical infrastructure can reasonably be hosted by one of the founding centers, with the governance infrastructure developed through agreements with the data custodians whose holdings are most important for the field’s work.
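The core pattern — raw data stays with its custodian, only a narrow query interface and an audit trail cross the boundary — can be illustrated in miniature. Everything in the sketch below (class names, method signatures, the records) is an assumption made for illustration; real implementations layer authentication, query vetting, and disclosure controls on top of this skeleton.

```python
from datetime import datetime, timezone

class Custodian:
    """Holds raw records locally; answers only aggregate queries."""
    def __init__(self, name, records):
        self.name = name
        self._records = records  # raw data never leaves this object

    def count_matching(self, predicate):
        return sum(1 for r in self._records if predicate(r))

class FederatedIndex:
    """Integration layer: routes queries to custodians, keeps an audit log."""
    def __init__(self, custodians):
        self.custodians = custodians
        self.audit_log = []

    def total_count(self, predicate, requester):
        self.audit_log.append((datetime.now(timezone.utc), requester))
        # Only aggregate counts cross the boundary; records stay put.
        return sum(c.count_matching(predicate) for c in self.custodians)

index = FederatedIndex([
    Custodian("Repository A", [{"year": 1969}, {"year": 1984}]),
    Custodian("Repository B", [{"year": 1971}]),
])
n = index.total_count(lambda r: r["year"] < 1980, requester="scholar-01")  # 2
```

The design choice worth noting is that the predicate runs inside each custodian's environment, which is what allows analysis of data that could never be released for centralization.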

A specific governance question concerns the access of the field’s scholars to data that documents funding decisions. The funding agencies whose decision-making the field studies are also potential funders of the field’s work (Paper 4), and the access arrangements must be designed to preserve the field’s analytical independence. The arrangements that have worked in adjacent contexts include time delays between the decisions and the data release (typically five to ten years), aggregation requirements that prevent identification of individual decisions, and the involvement of the funders in the design of the access arrangements without involvement in the specific analyses that the data support. The field’s data-governance work should draw on these precedents and should be explicit about the protections that preserve analytical independence.

A separate governance question concerns the data that documents the experiences of individual scholars — interview transcripts, narrative accounts of career decisions, ethnographic field notes — that bear on the mechanisms of neglect. The data is sensitive both because it identifies individuals and because some of what it documents bears on professional reputations and institutional relationships. The governance arrangements for this category of data should follow the established standards of qualitative research ethics, with informed consent procedures that explicitly address the possibility of identification, with secure storage arrangements, and with disclosure standards that prioritize the protection of participants over the convenience of subsequent scholars. The standards are well established in adjacent fields and require adaptation rather than original development.

7. A Qualitative Archive

The sixth proposal is an archive of qualitative materials — oral histories, interview transcripts, ethnographic field notes — that document the tacit knowledge of scholars whose careers have included engagement with neglected questions. The archive’s case rests on the observation that much of the most important knowledge about how neglect operates is held by individual scholars in forms that are not preserved by ordinary scholarly publication. Scholars who have considered pursuing particular questions and decided not to, scholars who have pursued such questions and paid professional costs, scholars who have participated in the founding or the abandonment of research programs, and scholars who have served as program officers or peer reviewers whose decisions shaped the distribution of attention: all of them hold knowledge that the field’s understanding of its subject depends on, and all of them are subject to the ordinary attrition of careers and lives.

The archive should be developed in stages, with the first stage prioritizing the scholars whose careers are nearing their end and whose loss would be most consequential. The selection process should draw on multiple sources of identification: the field’s professional community, the historians of relevant disciplines, the institutional knowledge of the founding centers, and the suggestions of scholars who are themselves interviewed and who can identify others whose accounts would be valuable.

The methodology of the oral-history work should draw on the established traditions of the history of science and of qualitative research in adjacent fields.[^6] The interviews should be conducted by scholars trained in oral-history methods, should be transcribed and reviewed by the interview subjects before being archived, and should be subject to access conditions that respect the subjects’ interests in how their accounts are used. The transcripts should be deposited in repositories that can preserve them for the long term, with metadata that allows them to be discovered by subsequent scholars.

A specific consideration is the relationship between the qualitative archive and the field’s analytic work. The interview transcripts are primary sources rather than analyses, and their value to the field comes from their availability for analysis by scholars whose questions cannot be specified in advance. The archive’s design should accordingly emphasize preservation and discoverability rather than the particular analyses that motivate the initial collection. The archive should outlast the founding scholars of the field, and the materials it preserves should be available to scholars whose work the founding scholars cannot anticipate.

The qualitative archive’s relationship to research ethics deserves particular attention. The interviews will sometimes document practices and decisions that the subjects’ institutions or professional communities would prefer not to be public. The standards under which the interviews are conducted must include explicit protections for the subjects, including the option to restrict access to particular passages, to redact identifying information about third parties mentioned in the interviews, and to embargo materials for periods that the subjects specify. The standards should be developed in consultation with research-ethics committees and with the established oral-history community, and they should be documented in the archive’s published policies so that subjects can make informed decisions about participation.
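The access protections just described — subject-specified embargoes and passage-level restrictions — amount to a policy that the archive's software must enforce. The sketch below is a hypothetical encoding of that policy; the field names and the rule logic are assumptions for illustration, not an existing standard.

```python
from datetime import date

def accessible_passages(interview, today):
    """Return the passages a requesting scholar may read as of `today`,
    honoring the subject's embargo and passage-level restrictions."""
    if today < interview["embargo_until"]:
        return []  # the whole interview is embargoed at the subject's request
    return [p for p in interview["passages"] if not p["restricted"]]

interview = {
    "subject": "Interview 042",
    "embargo_until": date(2030, 1, 1),
    "passages": [
        {"text": "Account of the program's founding.", "restricted": False},
        {"text": "Passage naming a third party.", "restricted": True},
    ],
}

before = accessible_passages(interview, date(2026, 6, 1))  # [] — still embargoed
after = accessible_passages(interview, date(2031, 6, 1))   # one unrestricted passage
```

Encoding the policy in the archive's software, rather than leaving it to custodial discretion, is one way to make the published access standards enforceable.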

8. Governance, Sustainability, and Open Science

The data infrastructure outlined above is substantial, and its governance and sustainability deserve explicit treatment.

The governance principle that runs through the proposals is that the data infrastructure should be governed by structures that include diverse perspectives within the field, that operate transparently, and that are accountable to the field’s professional community rather than to any single institution or funder. The principle requires explicit institutional design: standing committees with rotating membership, published procedures, regular reporting to the field’s community, and external review on a defined schedule. The institutional design should be settled at the founding of each data resource rather than developed in response to subsequent controversies, since governance arrangements established under pressure tend to be less stable than those established at the outset.

The sustainability of the infrastructure is the more difficult problem. Data resources have well-known patterns of decay: initial funding supports launch and early operation, but the long-term maintenance that determines whether the resource remains useful is harder to fund and often falls short. The field’s data infrastructure should be planned with sustainability in mind from the start, with explicit consideration of the operating costs after the launch funding ends, of the institutional commitments required to sustain operation, and of the contingency plans for what happens if particular funding sources are withdrawn. A data resource that becomes inaccessible or outdated after five years is sometimes worse than no resource at all, because users have come to depend on it in ways that the failure disrupts. The sustainability planning should be conservative.

The broader open-science movement provides important context for the field’s data infrastructure work. The movement has produced infrastructure (open repositories, persistent identifiers, standardized metadata, open licenses) on which the field can draw, and the movement’s standards have set the broader expectations for how scholarly data resources should operate.[^7] The field’s data infrastructure should align with open-science standards as a default, with departures from the standards justified explicitly when they are required by considerations specific to the field’s situation. The alignment serves both the field’s intrinsic commitments to the redistribution of attention — which sit poorly with restrictive access to its own outputs — and its practical interests in connecting with the broader research community.

A specific consideration is the field’s potential contribution to open-science infrastructure rather than just its consumption of that infrastructure. The methodological work on negative-space analysis, the development of common data elements for the field’s distinctive analyses, the qualitative archive’s preservation methodology: all of these have potential applications beyond the field’s own work and should be developed in ways that allow other scholars to use them. The contribution serves the field’s broader influence and provides a non-financial form of return on the open-science movement’s support.

9. Conclusion

This paper has proposed six elements of data infrastructure for neglect studies: a dashboard of research distribution, a registry of neglected questions, an archive of abandoned research programs, common data elements, federated access arrangements, and a qualitative archive. Each rests on existing scholarly resources and requires adaptation or extension for the field’s specific purposes. Each has governance and sustainability requirements that the field’s founding scholars must address explicitly. Each connects the field to adjacent enterprises whose collaboration the field will require.

The data infrastructure cannot be built without funding (Paper 4) and without the institutional homes that Paper 3 specified. It cannot be used effectively without the methodological standards of Paper 2 and the conceptual framework of Paper 1. The interdependencies are why the series has presented the elements in their current order: each paper presupposes the ones before it, and the data infrastructure proposed here is the operational expression of the commitments that the earlier papers articulated.

The papers that follow take up the workforce that will use the infrastructure (Paper 6), the research-governing institutions whose decisions the infrastructure exists to inform (Paper 7), and the field’s identity and self-critique (Paper 8). The data infrastructure paper is in some ways the most concrete of the series, and its proposals are accordingly the most easily evaluated. The field’s eventual success will be measurable in part by whether the resources proposed here exist in working form a decade after the field’s founding, and whether they are used by scholars whose work is improved by them.


Notes

[^1]: OpenAlex, launched in 2022 as a successor to Microsoft Academic Graph, has become the most widely used open bibliometric database. Its coverage, structure, and limitations are documented in Priem, Piwowar, and Orr (2022) and in the database’s own ongoing documentation. The comparison among the major bibliometric databases is reviewed in Visser, van Eck, and Waltman (2021).

[^2]: The topic-modeling literature relevant to bibliometric analysis is reviewed in Boyack and Klavans (2014), with subsequent methodological developments documented in the broader computational social science literature.

[^3]: The James Lind Alliance methodology has been adopted with variations by priority-setting partnerships in many fields, and the variations are themselves documented (Crowe et al., 2015; Cowan & Oliver, 2021). The methodology’s specific provisions for protocol standardization are part of what has allowed comparison across partnerships.

[^4]: The clinical research literature on common data elements is the most developed; the National Institutes of Health maintains the NIH CDE Repository as a coordinating resource. Sheehan et al. (2016) provides a useful overview of the principles. Adjacent work in social science research has been developed through the Data Documentation Initiative and the Inter-university Consortium for Political and Social Research.

[^5]: The federated data-access model has been developed extensively in health research, with the Observational Health Data Sciences and Informatics network providing one well-documented implementation. Voss et al. (2015) provides an overview. Similar approaches have been developed in social science research and in education research, with the European Data Infrastructure for the social sciences offering a comparable model.

[^6]: The oral-history of science tradition is documented through the American Institute of Physics’s program, the Royal Society’s archives, and several other institutional efforts. The methodological standards are reviewed in Doel and Söderqvist (2006).

[^7]: The open-science literature has grown substantially over the past two decades; Vicente-Saez and Martinez-Fuentes (2018) provides a review of the conceptual development, and the FAIR principles articulated in Wilkinson et al. (2016) have become a widely adopted standard for data resources.


References

Boyack, K. W., & Klavans, R. (2014). Creation of a highly detailed, dynamic, global model and map of science. Journal of the Association for Information Science and Technology, 65(4), 670–685.

Cowan, K., & Oliver, S. (2021). The James Lind Alliance guidebook (Version 10). James Lind Alliance.

Crowe, S., Fenton, M., Hall, M., Cowan, K., & Chalmers, I. (2015). Patients’, clinicians’ and the research communities’ priorities for treatment research: There is an important mismatch. Research Involvement and Engagement, 1, 2.

Doel, R. E., & Söderqvist, T. (Eds.). (2006). The historiography of contemporary science, technology, and medicine: Writing recent science. Routledge.

Priem, J., Piwowar, H., & Orr, R. (2022). OpenAlex: A fully-open index of scholarly works, authors, venues, institutions, and concepts. arXiv:2205.01833.

Sheehan, J., Hirschfeld, S., Foster, E., Ghitza, U., Goetz, K., Karpinski, J., Lang, L., Moser, R. P., Odenkirchen, J., Reeves, D., Rubinstein, Y., Werner, E., & Huerta, M. (2016). Improving the value of clinical research through the use of Common Data Elements. Clinical Trials, 13(6), 671–676.

Vicente-Saez, R., & Martinez-Fuentes, C. (2018). Open science now: A systematic literature review for an integrated definition. Journal of Business Research, 88, 428–436.

Visser, M., van Eck, N. J., & Waltman, L. (2021). Large-scale comparison of bibliographic data sources: Scopus, Web of Science, Dimensions, Crossref, and Microsoft Academic. Quantitative Science Studies, 2(1), 20–41.

Voss, E. A., Makadia, R., Matcho, A., Ma, Q., Knoll, C., Schuemie, M., DeFalco, F. J., Londhe, A., Zhu, V., & Ryan, P. B. (2015). Feasibility and utility of applications of the common data model to multiple, disparate observational health databases. Journal of the American Medical Informatics Association, 22(3), 553–564.

Wilkinson, M. D., Dumontier, M., Aalbersberg, I. J., Appleton, G., Axton, M., Baak, A., Blomberg, N., Boiten, J.-W., da Silva Santos, L. B., Bourne, P. E., Bouwman, J., Brookes, A. J., Clark, T., Crosas, M., Dillo, I., Dumon, O., Edmunds, S., Evelo, C. T., Finkers, R., … Mons, B. (2016). The FAIR Guiding Principles for scientific data management and stewardship. Scientific Data, 3, 160018.



The Funding Ecosystem: Public, Private, and Philanthropic Pipelines for Neglect Studies


Executive Summary

This paper addresses the hardest practical problem the field faces: how a discipline whose central business is to study what institutional science neglects can secure funding from the same institutions whose attention patterns it exists to examine. The problem is not merely awkward. It is structural, and the field’s long-term viability depends on solving it in ways that neither compromise the field’s analytical independence nor leave it perpetually marginal.

The paper develops six arguments. The first is that conventional grant mechanisms tend to reproduce existing research priorities through well-understood reviewer-selection and track-record effects, which means that the standard funding pathway is poorly suited to a field whose work routinely challenges existing priorities. The second is that several alternative funding instruments — lottery allocation, golden-ticket schemes, prize-based funding, retrospective grants, and high-risk programs — have been developed in adjacent contexts and offer partial precedents for what the field needs. The third is that private philanthropy is the most likely first mover for the field, but that exclusive reliance on philanthropic funding creates vulnerabilities that the field’s founding planners should anticipate. The fourth is that public-sector funding can be secured through specific argumentative strategies that have worked for adjacent fields and that draw on the field’s potential contribution to research-policy decisions. The fifth is that institutional endowments — chairs, named centers, and dedicated fellowship programs — provide stability that no other funding source can match and should be priorities for the field’s longer-term consolidation. The sixth is that the field needs its own funded research agenda on the funding system itself, both because the topic is intellectually appropriate to the field and because the findings would inform the field’s own funding strategy.

The paper concludes with a discussion of sequencing and a frank treatment of the failure modes that the field’s funding strategy must anticipate.


1. Introduction

The preceding papers have addressed what neglect studies would study (Paper 1), how it would study it (Paper 2), and the academic infrastructure within which the work would be conducted (Paper 3). All three presupposed that the field would have resources. This paper takes up the question those presuppositions defer.

The funding problem for neglect studies has a structural feature that distinguishes it from the funding problems of most emerging fields. Most new fields can argue that they address questions the existing research system has not yet recognized as important, and the argument is usually made to funders whose own portfolios are not implicated in the critique. A new field of cellular biology in the 1950s could plausibly argue that the existing biological sciences had not yet appreciated the importance of cellular mechanisms, and the argument could be made to funders whose investments in organismal biology were not threatened by the new field’s growth. The new field added to the research system without subtracting from it.

Neglect studies is differently situated. Its central business is the systematic identification of cases in which the existing research system has misallocated attention, and the existing research system is the same system that the field must approach for funding. The asymmetry is not absolute — funders have genuine interests in improving their own decision-making, and the field’s outputs are useful to them on those grounds — but the asymmetry is real, and the field’s funding strategy must take it seriously.

The argument of this paper is not that the funding problem is unsolvable. Comparable problems have been solved for adjacent fields, and the solutions, where examined carefully, suggest pathways that neglect studies can adapt. The argument is rather that the funding problem requires deliberate strategic thinking from the outset, that the strategy must include multiple revenue streams whose vulnerabilities do not correlate, and that the strategy must preserve the analytical independence on which the field’s value to its funders depends.

2. Why Conventional Grant Mechanisms Are Poorly Suited

The mechanisms by which conventional grant programs allocate funding have been studied extensively, and the findings are consistent across multiple fields and funding agencies.[^1] Three features of standard peer-reviewed grant programs are particularly relevant to the prospects for neglect-studies funding.

The first is reviewer selection. Grant reviewers are drawn from the established researchers in a field, which means that the questions reviewers find compelling are the questions the field is already pursuing. Proposals that depart from established questions face a structural disadvantage: reviewers must understand the proposal’s framing well enough to evaluate it, and proposals framed around questions the reviewers do not recognize as important are systematically rated lower than proposals framed around established questions. The effect is well documented in studies of grant review across multiple agencies, and it operates without any individual reviewer behaving in bad faith.

The second is track record. Grant applications are evaluated in part on the applicant’s record of previous work, and the standard measures of that record — prior publications in major outlets, prior grant funding, citation counts — are themselves shaped by the same attention patterns the field is trying to study. A scholar who has built a career working on a neglected question will have a record that looks weaker by standard measures than a scholar working on an established question, even when the substantive quality of the work is comparable. The applicant’s record problem compounds the reviewer problem, with the result that scholars working on neglected questions face two reinforcing structural disadvantages in the standard grant system.

The third is project specificity. Conventional grants require applicants to specify what they will do, what they expect to find, and what methods they will use, in sufficient detail for reviewers to assess feasibility. The specifications work well for projects that build on established methods and address established questions, but they work poorly for projects that propose to develop new methods, to identify what is not yet recognized, or to recover lines of inquiry whose specifics cannot be fully specified before the recovery work has been done. Several of the methodological approaches outlined in Paper 2 — historical recovery, exploratory bibliometric mapping, expert elicitation in areas without established expert communities — fit poorly into the conventional grant template.

The implication is not that conventional grants are useless to the field. Established scholars working on substantiated cases of neglect, using mature methods, can compete for conventional funding successfully, and the field should pursue such funding actively. The implication is rather that conventional grants alone are insufficient, particularly for the field’s exploratory and methodologically innovative work, and that the funding strategy must include mechanisms designed for the kinds of projects that the conventional system is structurally poor at supporting.

3. Alternative Funding Instruments

Several alternative funding instruments have been developed in adjacent contexts and offer partial precedents for what neglect studies needs. Each is reviewed below with its strengths, limitations, and applicability to the field.

3.1 Lottery Allocation

Lottery allocation, in which grants are awarded by random selection from a pool of proposals that meet a threshold quality standard, has been piloted in several settings, including the Health Research Council of New Zealand’s explorer grants and several European national funding agencies’ experimental programs.[^2] The argument for lottery allocation is that the precision of conventional peer review is lower than its formal status suggests, that the costs of preparing applications and conducting reviews are substantial, and that random selection from a quality-screened pool produces outcomes nearly as good as conventional peer review at much lower administrative cost.
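The two-stage mechanism described above can be sketched in a few lines. The threshold, scores, and award count below are invented for illustration; real programs add eligibility checks and tie-breaking rules around this core.

```python
import random

def lottery_allocate(proposals, threshold, n_awards, seed=0):
    """Stage 1: screen proposals against a quality threshold.
    Stage 2: award randomly among the qualifying pool.
    proposals: list of (name, quality_score) pairs."""
    pool = [name for name, score in proposals if score >= threshold]
    rng = random.Random(seed)  # fixed seed here only for reproducibility
    return rng.sample(pool, min(n_awards, len(pool)))

proposals = [("P1", 0.90), ("P2", 0.40), ("P3", 0.80), ("P4", 0.75), ("P5", 0.30)]
winners = lottery_allocate(proposals, threshold=0.70, n_awards=2)
# Two of P1, P3, P4 receive awards; reviewer preferences play no role
# past the quality screen.
```

The structural point the sketch makes explicit is that any bias in the system must enter through the threshold step, which is why the quality-screening criteria deserve the same scrutiny as conventional review.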

For neglect studies, lottery allocation has a specific additional appeal. The reviewer-selection and track-record effects that disadvantage proposals on neglected questions in conventional peer review are removed once a proposal has cleared the quality threshold, because the final selection among qualifying proposals is random rather than based on reviewer preferences. The field would benefit if lottery mechanisms became more widespread in research funding, and the field’s scholars should both contribute to the methodological evaluation of lottery programs and advocate for their expansion where evidence supports it.

The limitations are that lottery allocation depends on a quality-screening step whose criteria are themselves vulnerable to the same biases as conventional peer review, that the lottery component is politically controversial and slow to gain institutional acceptance, and that the field cannot rely on lottery mechanisms in the short term because the existing programs are small and the expansion of the model is uncertain.

3.2 Golden-Ticket Schemes

A golden-ticket scheme allows a small number of senior researchers — typically program officers, distinguished scientists, or members of a standing advisory committee — to fund a small number of projects per year without going through the standard review process. The Volkswagen Foundation’s experimental program along these lines has been documented, and similar mechanisms exist in several national agencies.[^3] The mechanism allows projects that conventional review would reject to receive funding when a credible senior figure judges them worth supporting.

For neglect studies, the golden-ticket mechanism is particularly well-suited to the field’s exploratory and high-risk work. A program officer who understands the field can use a golden-ticket allocation to fund a historical recovery project that conventional review would find insufficiently specified, or a methodological development project whose returns are uncertain. The mechanism trades some accountability for some flexibility, and the trade is worth making for a fraction of the funding portfolio.

The limitations are that golden-ticket schemes depend on the program officer’s judgment being good, that they create accountability concerns when projects fail, and that they are politically vulnerable to criticism as patronage. The field should advocate for golden-ticket mechanisms in research funding generally, and should encourage funders that adopt them to allocate some of the tickets to neglect-studies projects, but should not depend on the mechanism as a primary funding source.

3.3 Prize-Based Funding

Prize-based funding allocates resources retrospectively, by rewarding work that has been completed and judged successful, rather than prospectively, by funding work that has been proposed. The mechanism has been developed extensively in technology development — the X Prize family is the most visible example — and more recently in basic research through programs like the Breakthrough Prizes.[^4] Prize-based funding has the advantage of selecting on demonstrated quality rather than on the persuasiveness of an application, and the disadvantage of requiring researchers to fund the underlying work before the prize is available.

For neglect studies, prize-based funding can serve as a complement to other mechanisms rather than as a primary source. A prize for the best historical recovery of an abandoned research program, or for the best methodological contribution to bibliometric mapping of neglected areas, would incentivize work in the field’s priority areas without requiring the prize-awarding body to evaluate proposals. Prize programs are relatively straightforward to administer and can be funded by individual donors who want to support the field but who are not positioned to fund larger programs.

The limitations are familiar: prizes work best when the criteria for excellence are clear and when the prize amount is large enough to motivate substantial work. The field’s prize programs should be designed with both criteria in mind and should be understood as supplements to rather than substitutes for the main funding pathways.

3.4 Retrospective Funding for Previously Unfunded Work

A specific variant of prize-based funding deserves separate treatment. Retrospective funding programs reimburse researchers for work they have completed without prior funding, either through full grants or through smaller supplementary awards. The mechanism addresses a problem specific to scholars who have worked on neglected questions: the work has often been done on personal time, in unfunded projects, or as a side effort within larger funded programs whose primary purpose was something else.

For neglect studies, retrospective funding has two attractions. It rewards scholars who have already invested in neglected work without the institutional support that the field aims to build, which is a way of honoring the field’s intellectual debt to its precursors. And it identifies the scholars whose careers and recent work make them natural participants in the field’s consolidation, which serves the field’s community-building goals. A modest retrospective-funding program, administered by one of the field’s founding centers, would be a useful element of the funding mix.

3.5 High-Risk Programs

Several major funders have established programs specifically for high-risk research that conventional review would reject. The U.S. National Institutes of Health’s Director’s Pioneer Award, the National Science Foundation’s EAGER mechanism, the European Research Council’s various advanced grant programs, and the Defense Advanced Research Projects Agency’s program structure all represent different attempts to address the problem that conventional review is conservative.[^5] The programs vary in size, in the breadth of fields they support, and in the criteria they use to identify high-risk work.

For neglect studies, the high-risk programs are partially relevant. The field's work is not high-risk in the same sense that a moonshot biological experiment is — most neglect-studies projects have modest budgets and predictable methods — but the field's exploratory work shares with high-risk research the feature that its specific outputs cannot be predicted in advance. The argument for funding the field through high-risk programs has been made successfully in several adjacent metascience contexts, and the same argument should apply to neglect studies.

The limitations are that high-risk programs are competitive, that they have their own selection biases (often favoring established researchers whose track records make the risk seem worth taking), and that the funding amounts are typically larger than neglect-studies projects require. The mechanism is useful for the field’s larger and more ambitious projects but is not the natural home for the field’s bread-and-butter work.

4. Private Philanthropy as the Likely First Mover

The case for private philanthropy as the field's most important early funding source rests on three considerations. The first is that foundations are under less structural pressure than public funders to align their portfolios with established research priorities. A foundation whose mission includes epistemic diversity, open science, or research reform can fund neglect studies without justifying the funding against the priorities of an established field, in a way that public funders typically cannot. The second is that foundations have shorter decision cycles than public funders, which allows them to respond to emerging fields more quickly. The third is that several foundations have already demonstrated interest in adjacent areas — metascience, open science, research reform, the philosophy of science — that make them natural candidates for engagement with the proposed field.

The foundations most plausibly aligned with the field’s mission include those that have funded the broader metascience and open-science enterprise, those whose missions emphasize the structure of scholarly inquiry, those whose missions emphasize particular substantively neglected areas (women’s health, rare diseases, environmental justice), and the family foundations of donors whose personal histories include direct encounters with neglected questions. The field’s founding strategy should include systematic engagement with foundations in each of these categories, with proposals tailored to the foundation’s particular mission.

The advantages of philanthropic funding come with corresponding vulnerabilities. Foundations are not permanent institutions; their priorities shift with changes in leadership, with the death of founding donors, and with shifts in the broader philanthropic climate. A field that depends on any single foundation’s support is vulnerable to that foundation’s reorientation, and the vulnerability is not theoretical. Several emerging fields in recent decades have been substantially damaged by the loss of foundation support that had been their primary funding base.

The implications for the field’s strategy are several. The field should pursue multiple foundation funders rather than concentrating on any one, even if one foundation is willing to be a primary partner. The field should use philanthropic funding to build infrastructure that can survive the loss of any individual funder — endowments, institutional commitments, training pipelines — rather than to support ongoing operations that would collapse without continued philanthropic support. And the field should be transparent with its philanthropic funders about its long-term ambitions for funding diversification, both because honesty is intrinsically appropriate and because foundations that understand the field’s strategy are more likely to support the diversification effort than foundations that have not been told about it.

5. Public-Sector Funding

The argument for pursuing public-sector funding alongside philanthropic funding is partly about diversification and partly about the specific advantages of public funding. Public funders provide larger and more sustained support than most foundations, they often have longer time horizons, and their funding decisions are more visible, which means that public funding for the field would signal a level of recognition that purely philanthropic funding does not confer.

The challenge is that public funders face stronger pressure than private ones to align their portfolios with established research priorities, and the field's mission is precisely to question those priorities. The strategy for securing public funding must therefore include arguments that present the field's work in terms that public funders can accept without committing themselves to positions their political environments would find difficult.

Three argumentative strategies have worked for adjacent fields and should be deployed for neglect studies.

The first is the quality-improvement argument. Public funders have an institutional interest in improving the quality of their own decision-making, and the field’s outputs are directly relevant to that interest. A funding agency that supports research on how its own grant decisions could be improved is not endorsing any particular critique of its existing portfolio; it is investing in its own institutional learning. The argument has worked for the broader metascience enterprise, with substantial public funding flowing to research on peer review, reproducibility, and research evaluation, and the same argument should work for the parts of neglect studies that bear directly on funding-agency decision-making.

The second is the return-on-investment argument. The counterfactual reasoning proposed in Paper 2 produces estimates of the expected returns to research investment, and those estimates can be used to argue that better identification of high-value neglected areas would improve the returns to public research investment overall. The argument is partial — the estimates are uncertain, and public funders are constrained by political and institutional considerations beyond expected value — but it provides a framing in which the field’s work serves rather than challenges the funder’s mission.

The third is the equity argument. Several aspects of neglect studies, particularly the peripheral-inquiries and constituency-less-questions categories of Paper 1, bear directly on questions of equity in research portfolios. Public funders in many jurisdictions have explicit commitments to equity in research, and the field’s outputs are directly relevant to those commitments. The argument has worked for research on health disparities, for the inclusion of underrepresented populations in clinical trials, and for the support of research from underrepresented regions, and the same logic should apply to the field’s work on the structural mechanisms that produce neglect in these areas.

A practical consideration: public funding for the field is more likely to come through programs not specifically designated for it than through dedicated programs. A scholar working on neglect studies methodology may be funded through a metascience program, a program on research policy, a program on a particular substantive area in which neglect studies tools are being applied, or a general program for which the project happens to be competitive. The field’s strategy should accordingly include cultivating program officers across multiple funding lines, building the relationships and the documented track record that allow scholars to apply through whichever programs fit best, rather than waiting for dedicated neglect-studies funding to materialize.

The longer-term goal of dedicated public funding for the field should be pursued but should be understood as a destination rather than a near-term objective. The case for dedicated funding will become more compelling as the field accumulates a track record of useful outputs, and the case should be made through the channels that have worked for adjacent fields — formal reports to advisory bodies, engagement with congressional and parliamentary committees, white papers commissioned by interested agencies, and the gradual accumulation of program officers who understand the field’s work and can advocate internally for dedicated support.

6. Institutional Endowments

The most stable funding source for an academic field is an endowment whose returns support ongoing activity in perpetuity. Endowments take longer to build than other funding mechanisms, and they require donor relationships that are different in character from the foundation relationships discussed above, but they provide a foundation for the field that no other funding source can match.

The forms of endowment most relevant to the field are three.

The first is endowed chairs in neglect studies at the founding centers. An endowed chair provides a permanent faculty position in the field at a major research university, which both supports the individual scholar who holds the chair and signals institutional commitment to the field that other faculty appointments do not match. Endowed chairs typically require donor commitments in the range of two to five million dollars in current U.S. terms, and the relationship-building required to secure such commitments is substantial. The relevant donor population includes individuals whose personal histories include encounters with neglected questions, philanthropists with broader interests in the structure of scholarly inquiry, and the donor advisory committees of foundations whose missions align with the field.

The second is named research centers. A named center, supported by an endowment that funds its operating budget, provides the institutional stability that distinguishes a permanent institution from a project-funded one. Named centers in adjacent fields have typically required donor commitments in the range of ten to twenty-five million dollars, with the variation depending on the center’s size and the cost structure of the host institution. The donor population is similar to that for endowed chairs but is smaller, since fewer donors are positioned to make commitments at the scale required.

The third is dedicated fellowship programs. A named fellowship program supports residential or virtual fellowships for scholars working in the field, and a well-endowed fellowship program can support a steady stream of senior scholars whose participation builds the field’s community even when their primary affiliations are elsewhere. Fellowship endowments are smaller than chair or center endowments — typically one to three million dollars per fellowship line — and can be built incrementally as donor relationships develop.

The field’s endowment strategy should be pursued in parallel with its other funding work, on the assumption that endowment-scale donor relationships take five to ten years to develop and that early founding investments in the relationships will produce returns only in the second decade of the field’s existence. The institutional advancement offices of the founding centers should be informed about the field’s endowment ambitions from the outset, and the centers’ directors should be expected to invest substantial time in the donor cultivation that endowment building requires.

7. A Research Agenda on the Funding System Itself

The field has an additional reason to attend to funding questions beyond the practical one. The funding system is itself one of the most important mechanisms by which scholarly attention is allocated, and the systematic study of how funding decisions shape research portfolios is a core topic for the field. The relevant questions include how grant-review processes produce conservatism, how funding-agency priority-setting interacts with the priorities of the scholars who serve on review panels, how the geographic and institutional distribution of grants affects subsequent attention to questions, and how the time horizons of funding programs affect the kinds of work that can be pursued.

A research agenda on the funding system is intellectually appropriate to the field, and the findings would inform the field’s own funding strategy in ways that purely strategic thinking cannot. The agenda should be pursued through partnerships with funding agencies that are willing to make their decision-making accessible to study, through bibliometric and archival work on the historical record of funding decisions, and through comparative studies of funding systems in different national contexts. The James Lind Alliance and the broader patient-and-public-involvement literature provide precedents for this kind of work in the specific context of health research funding,[^6] and the broader metascience literature has begun to develop parallel work for other funding contexts.

A specific recommendation is that one of the founding centers should host a research program on funding systems as a core part of its work, both because the program would contribute to the field’s substantive agenda and because the program’s findings would be directly useful for the field’s own strategy. The program should produce both peer-reviewed scholarship and practical outputs directed at funders — reports, briefings, advisory engagements — that translate the scholarship into terms funders can use.

8. Sequencing and Failure Modes

The funding strategy outlined above involves many parallel activities, and the question of sequencing matters less than the question of how the activities are coordinated. The proposal is that all the funding streams discussed in this paper should be pursued from the field’s earliest years, with the recognition that different streams will produce results on different time horizons and that the field’s resource base will shift in composition over time.

In the first three to five years, the field’s funding will likely come primarily from philanthropy, with smaller contributions from conventional grants secured by individual scholars working through established disciplinary channels. The founding centers will need to demonstrate productivity during this period, both because the productivity is the foundation for everything else and because the philanthropic funders will be evaluating their investment in the field as the early period progresses.

In years five to ten, the funding mix should diversify to include substantial public funding, secured through the strategies outlined above, and the beginnings of endowment funding from donor relationships that the founding centers have been cultivating. The journal, conference, and registry should be self-sustaining by this point, on funding models established at their launch.

In years ten to twenty, the field’s funding should rest on a diversified base in which no single source provides more than perhaps thirty percent of total funding, in which endowment income provides a stable foundation that protects against short-term shocks, and in which the funding mix includes substantial contributions from sources that did not exist or did not support the field in its founding years.

The failure modes that the strategy must anticipate are several.

The first is the loss of an early major philanthropic supporter before alternative funding has been secured, which would force the field into rapid contraction at a stage when contraction is most damaging. The corrective is diversification from the outset, even when diversification involves accepting smaller commitments from multiple funders rather than larger commitments from one.

The second is the capture of the field by a funder whose priorities the field must accommodate in ways that compromise its analytical independence. The corrective is institutional: the founding centers' governance must include explicit protections for editorial and research independence, and the field's scholars must be willing to refuse funding whose conditions would compromise the field's value.

The third is the persistent marginality of the field, in which it secures enough funding to survive but not enough to consolidate, leaving its scholars in precarious positions and its institutional infrastructure underdeveloped. The corrective is patience combined with strategic discipline: the field must be willing to develop slowly rather than to compromise the methodological and institutional standards on which its long-term value depends.

The deepest failure mode is one that the field shares with several adjacent enterprises in the broader landscape of research-reform work. A field whose mission is to examine the institutions that fund it can find that its mission has been domesticated, that its outputs have been incorporated into the funders’ standard practice in ways that drain the work of its critical edge, and that the field has become a service provider to the funders rather than an independent voice. The risk is real, and the field’s institutional design should include explicit safeguards against it. The safeguards are partly procedural — independent editorial governance, diversified funding, professional standards that reward uncomfortable findings — and partly cultural, in the sense that the field’s professional community must maintain a culture in which the conscience of the field is more important than its access to resources. The cultural maintenance is the harder of the two and the more important.

9. Conclusion

This paper has argued that the funding problem for neglect studies is hard but not unsolvable, that the solution requires a diversified strategy that does not depend on any single source, and that the strategy must include both pragmatic engagement with existing funding mechanisms and the longer-term work of building dedicated funding sources that the field controls. The paper has identified philanthropic foundations as the likely first movers, public funders as essential second-stage partners, endowments as the foundation for long-term stability, and a research agenda on funding systems as both intellectually appropriate to the field and strategically valuable for it.

The funding problem cannot be solved by argument alone. It requires the patient relationship-building, the production of credible outputs, the demonstration of useful contributions, and the political and institutional work that emerging fields have always required. The argument of this series is that the work is worth doing, that the obstacles are surmountable, and that the field’s eventual contributions to the scholarly enterprise and to public policy will justify the founding investment many times over. Whether the argument is correct depends on what the founding scholars and their funders actually do over the next decade.

Paper 5 takes up the data infrastructure that the field’s research will require.


Notes

[^1]: The literature on grant peer review and its biases is reviewed in Lee, Sugimoto, Zhang, and Cronin (2013) and in subsequent work. The specific findings on conservatism in grant review are developed in Boudreau, Guinan, Lakhani, and Riedl (2016) and in Nicholson and Ioannidis (2012).

[^2]: The Health Research Council of New Zealand’s explorer grants program and subsequent lottery-based programs are documented in Liu et al. (2020). The broader argument for lottery allocation is developed in Fang and Casadevall (2016) and in Avin (2019).

[^3]: The Volkswagen Foundation’s experimentation with alternative funding models, including elements of golden-ticket allocation, is documented in the foundation’s own program reports. The broader literature on alternatives to conventional peer review in funding includes Roumbanis (2019) on lottery and golden-ticket mechanisms.

[^4]: The literature on prize-based funding is partly scholarly and partly practical; Stine (2009) provides a Congressional Research Service overview of the U.S. government’s use of prizes, and Williams (2012) addresses the comparative advantages of prizes versus grants for research support.

[^5]: The U.S. NIH Director’s Pioneer Award has been evaluated in Azoulay, Graff Zivin, and Manso (2011) and in subsequent analyses. The broader literature on high-risk funding mechanisms is reviewed in Heinze (2008).

[^6]: The James Lind Alliance methodology and its application to research-funding priorities are documented in Cowan and Oliver (2021) and in Crowe, Fenton, Hall, Cowan, and Chalmers (2015). The broader literature on patient and public involvement in research funding is reviewed in Manafò, Petermann, Vandall-Walker, and Mason-Lai (2018).


References

Avin, S. (2019). Mavericks and lotteries. Studies in History and Philosophy of Science Part A, 76, 13–23.

Azoulay, P., Graff Zivin, J. S., & Manso, G. (2011). Incentives and creativity: Evidence from the academic life sciences. RAND Journal of Economics, 42(3), 527–554.

Boudreau, K. J., Guinan, E. C., Lakhani, K. R., & Riedl, C. (2016). Looking across and looking beyond the knowledge frontier: Intellectual distance, novelty, and resource allocation in science. Management Science, 62(10), 2765–2783.

Cowan, K., & Oliver, S. (2021). The James Lind Alliance guidebook (Version 10). James Lind Alliance.

Crowe, S., Fenton, M., Hall, M., Cowan, K., & Chalmers, I. (2015). Patients’, clinicians’ and the research communities’ priorities for treatment research: There is an important mismatch. Research Involvement and Engagement, 1, 2.

Fang, F. C., & Casadevall, A. (2016). Research funding: The case for a modified lottery. mBio, 7(2), e00422-16.

Heinze, T. (2008). How to sponsor ground-breaking research: A comparison of funding schemes. Science and Public Policy, 35(5), 302–318.

Lee, C. J., Sugimoto, C. R., Zhang, G., & Cronin, B. (2013). Bias in peer review. Journal of the American Society for Information Science and Technology, 64(1), 2–17.

Liu, M., Choy, V., Clarke, P., Barnett, A., Blakely, T., & Pomeroy, L. (2020). The acceptability of using a lottery to allocate research funding: A survey of applicants. Research Integrity and Peer Review, 5, 3.

Manafò, E., Petermann, L., Vandall-Walker, V., & Mason-Lai, P. (2018). Patient and public engagement in priority setting: A systematic rapid review of the literature. PLOS ONE, 13(3), e0193579.

Nicholson, J. M., & Ioannidis, J. P. A. (2012). Conform and be funded. Nature, 492(7427), 34–36.

Roumbanis, L. (2019). Peer review or lottery? A critical analysis of two different forms of decision-making mechanisms for allocation of research grants. Science, Technology, & Human Values, 44(6), 994–1019.

Stine, D. D. (2009). Federally funded innovation inducement prizes. Congressional Research Service.

Williams, H. (2012). Innovation inducement prizes: Connecting research to policy. Journal of Policy Analysis and Management, 31(3), 752–776.

