The unreasonable effectiveness of shared null results:

or, if open science were Wordle, we might usually get the answer on the first line.

Wordle is easy. Science is hard.

This is a blog that compares science (the open kind) to cheating at Wordle. But not in a bad way. Since this is a blog, I’ll run the narrative first. I have included links to further readings from The Open Scientist Handbook. [OSH] You can find all the other literature references at the end.

ASAPbio recently (October 2022) announced a competition to share negative results as preprints. <https://asapbio.org/competition2022>. Sharing negative results is integral to open science achieving its unreasonably effective potential. The sharing of all research products is one of open science’s main goals.

We can imagine, in an alternative present, an academic publishing endeavor that has long made space for null results. Ideally, these results would be available in the same journals that publish positive results, and in the same proportion as they are generated through rigorous scientific methods. Publication in this manner might fairly accurately reflect the sum of knowledge generated by research (if it also included data, software, etc.).

Now, let’s look at the academic publishing regime we have today, where null-results are conspicuously absent, and the published corpus reveals a tiny fraction of the work of scientists across the globe. Sources on the topic of “publication bias” outline how this damages the entire academy. A further assortment of bad practices — of bad science — can also be uncovered through methodological and content reviews of published research. This is where Retraction Watch comes into play.

We can find at least two streams of perverse incentives in the current publication situation. The first is an outcome of the arbitrary scarcity of publication opportunities. This warps the whole research landscape and rewards narrowly selected research results, instead of valorizing methodological rigor. Even the available published work rarely includes enough information to allow replicating the research.

The second stream of bad science is the central role the current publication regime plays in career advancement and future funding. By using metrics that are hooked into “journal impact factors” and other forms of pseudo-prestige (e.g., the h-index), universities and funders get to pretend they can evaluate the merits of a researcher’s work without spending the time and effort an actual qualitative review requires.

Apart from the weaknesses inherent in these metrics simply as metrics, when a metric becomes a goal for researchers, Goodhart’s Law predicts that it will be gamed until its original value is erased or even reversed. The unnecessary scarcity of publication opportunities creates an ersatz elite of “published” academics, and a much larger cohort of undervalued, marginalized researchers. [Fierce equality] (Perhaps another blog is needed to show how academic publication is like TikTok.)

OMG! I just got published in Science!

At the same time, the race to get published crowds out the open sharing of research results. The need to be first also prevents collaborative networked interactions with other teams that could greatly accelerate new knowledge discovery. When the great majority of the actual work of science has no place to be shared, the global work of science is fundamentally diminished.

The current availability of preprint servers for a number of disciplines (and also new AI search engines to facilitate discovery) means we are at the front edge of the capability to demonstrate how widespread, open sharing can decenter the current logic of scarcity, in favor of a new, extraordinary abundance.

Back to Wordle

I am going to use Wordle as an analogy that can show the unreasonable effectiveness of open sharing.

Find the right word…

For those of you who don’t Wordle, when you start a Wordle puzzle, you have six chances to uncover the correct five-letter word. Each layer provides information to help you out with the next layer.

Whew! just made it.

You build a solution space for the correct answer by knowing which letters are not used and which are used in the wrong place, or in the correct place.

That’s right. You get to use your own mistakes to learn and improve. In every new layer, the puzzle gets easier. There is also guesswork, and so a bit of luck involved. It’s an elegant design for a short puzzle. You can do one a day.

If Wordle had eight layers, most players would never lose. If Wordle only had two layers, most players would rarely win. Wordle works because its difficulty level is a sufficient challenge to most who play it.
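The way each layer narrows the solution space can be sketched in a few lines of Python. (The scoring rules follow the game as described above; the word pool and guesses here are illustrative, not the official Wordle list.)

```python
from collections import Counter

def feedback(guess: str, answer: str) -> str:
    """Score a guess: 'g' = right letter, right place; 'y' = right letter,
    wrong place; '.' = letter not available (duplicates handled)."""
    result = ["."] * len(guess)
    unmatched = Counter()                      # answer letters not matched green
    for i, (g, a) in enumerate(zip(guess, answer)):
        if g == a:
            result[i] = "g"
        else:
            unmatched[a] += 1
    for i, g in enumerate(guess):
        if result[i] != "g" and unmatched[g] > 0:
            result[i] = "y"
            unmatched[g] -= 1
    return "".join(result)

def consistent(candidates, guess, fb):
    """Keep only the words that could still be the answer."""
    return [w for w in candidates if feedback(guess, w) == fb]

# One layer of play against a small candidate pool:
pool = ["crane", "slate", "trace", "grace", "brace", "crate"]
fb = feedback("slate", "crate")                # "..ggg"
print(consistent(pool, "slate", fb))           # pool collapses to ["crate"]
```

A single layer of shared information, even a “wrong” guess, eliminates most of the remaining possibilities. That is the whole trick.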

One line is all you get…

But what if your job future depended on filling in the correct word on the first line? What if you were given this puzzle and told you must guess correctly or find a new position?

You might be tempted to look online and find the answer. Everyone solves the same puzzle each day. Cheat to win? Why not? You can do “bad Wordle.” But then, someone would add a metric based on time, and only the first person who solves the puzzle gets job security. And on and on. The zero-sum-game solution.

However, unlike Wordle, the science puzzle in front of you has never been solved. That’s the whole point. You choose a significant unknown, because that is why you do science. You need to solve something new.

Currently, within the arbitrary scarcity of the publication regime, you need to solve this unknown now, because others are out there looking to solve the same, or related problems. Only the first solution will get published. It’s your lab team against all the others out there. (Of course this is an unnecessary competition, and a hallmark of failed science, but that’s another blog, comparing science with Survivor.) [Playing the game]

Science… only one research project will publish…

When you propose a research experiment, you only get one line: you have one chance to discover a result that explains something new. Nature (actual nature, not the journal) doesn’t give you a lot of hints, and the NSF has (finally) funded this one project for your team.

You are a scientist. You have subscribed to the hardest puzzle anywhere. Your job is to provide the answer to this puzzle. You have lined up all the resources you believe are sufficient. You have a proven methodology and a plan. Your team does the work. You have your results. Now you must publish your work.

science is a lot harder than Wordle… and you only get one line to solve.

Let’s say that getting an article into the academic journal of your choice (the one with a “high impact factor,” or whatever) today requires this:

Today, journals accept only a few research results. They brag about their rejection rate…

After finishing the project, your actual research result (the information you found) may look like this:

This finding is as important as any…

Each bit of that finding is as valuable to science as any other finding. It just has no public home to go to. Today you have one sort-of-good — but actually unfortunate — choice, and other “choices” that are not good at all, and totally unfortunate for science.

The “goodish” choice is to keep the data safe, maybe host it up on a repository, and do the write-up for the granting agency. Use the lesson learned for the next research project. Your lab will add this research to its shared knowledge and move ahead. This is the “file-drawer” outcome.

You will wonder how this outcome will affect your ability to get future funding, and realize the lack of a publication might impact your next performance review. You might feel like all the work you accomplished was wasted. Yes, you found out something new, something important on its own terms, only this result was “negative.” It does not spell out a “significant finding” you can use to leverage your career.

Your research revealed a new piece of the larger truth. In terms of the knowledge space of your field, this new information occupies a corner of the space of “already-accomplished research.” It is no less significant as a finding than any other research. It is another step in the long journey that is science. [Science and the infinite game] However, today, this work matters far less than it should to the academy. And less than it might for your job.

At this point, all of the perverse incentives of current science are now clearly in play if you let them infect what you do next. [Toxic incentives] Perhaps, in some desperation, you go back to the data and revise your hypothesis to match the “findings.”

Maybe you set out to prove that “people who eat their largest meal in the morning gain less weight,” but now your research proves — with great statistical precision — that “people do not eat while they are asleep.”

You shop this “finding” around to journals and one of them publishes it. It is no longer research you are proud of, but your list of publications is larger, and your funder might not notice.

Digging around the data you uncovered something you can show off…

Maybe you cannot find a different hypothesis, so you go back to the data and “regularize” it until some significant pattern pops up. You announce this as a “finding” and shop it to journals. Your lab team will need to be in on this move; they are implicated in your deceit. You figure that nobody will get funded to replicate your work, so this finding will be accepted as legitimate.

You have proven nothing here but your desperation to save your own career.

Congratulations, your published work will mislead everyone who cites it. You have wounded the body of knowledge you and your colleagues share. Your career now means more to you than the integrity of your work.
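The “regularize it until a pattern pops up” move is the multiple-comparisons problem in statistics. A minimal, purely illustrative simulation (the numbers are invented, not drawn from any study) shows why data dredging reliably yields spurious “findings”:

```python
import random

random.seed(1)

N_TESTS = 100        # number of patterns "tried" against the data
N_SAMPLES = 100      # observations per test
SE = 0.5 / N_SAMPLES ** 0.5          # standard error of a fair-coin mean

# Every experiment here is pure noise: fair coin flips, no real effect anywhere.
false_positives = 0
for _ in range(N_TESTS):
    mean = sum(random.random() < 0.5 for _ in range(N_SAMPLES)) / N_SAMPLES
    if abs(mean - 0.5) > 1.96 * SE:  # the usual "p < 0.05" threshold
        false_positives += 1

# Roughly 5 of the 100 null tests will "pop up" as significant by chance alone.
print(f"{false_positives} spurious 'findings' out of {N_TESTS} null tests")
```

Search through enough pure noise and the 5% error rate guarantees something will clear the significance bar. Without pre-registration or shared null results, only those lucky hits get published.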

Open science can fix this. It must, and it will.

Here is where open science can help. Let us imagine that the academy has promoted publishing null results on preprint and eprint servers for a decade. With no need for the “file-drawer” option, the number of null-result findings available online is now much larger than the number of recent significant findings. (Note: I use the term “eprint” to signify new publishing efforts that publish all submissions and then conduct open peer review to add value to them.)

Because nobody needs to contort their research to get published, the actual research statistics being used are much more rigorous, and the data more reusable and available. Let’s add here that a null-result pre/eprint that gets cited is treated the same as any other publication, in terms of career-building metrics. That’s another goal of open science: new institutional cultural practices and norms.

Back to square one… you still need to do the research

Open science still means you are facing the same complex problems

You have just received notice that your funding has been approved. You are still faced with a complex phenomenon to explain, just as before.

Of course, you have already done a complete literature search through all the appropriate journals to see if there are positive findings that would improve the questions you have, and the final hypothesis you will be using. Your research did not end with the already-published positive findings. Before you even wrote your proposal, you expanded your search (using powerful AI-enhanced search routines, and advanced keyword techniques) to include negative results in related experiments. These are mostly posted on preprint servers.

This is what you discovered: A colleague of yours in Germany did a research project closely related to your work, and this was their team’s result:

You found this on a preprint server…

Another colleague in China also put their negative result up on a preprint server:

This looks very interesting to your team…

A post-graduate researcher in California submitted their research findings to a preprint server:

Unexpected but really valuable

Your team sifts through these findings and their open data. You use protocols developed to match other findings with the phenomenon you are investigating.

You are encouraged when you realize that instead of just this:

The shared resource of null-results has filled in many of the unknowns internal to the unknown you are tackling. You now know so much more about the object/process under study:

You have a better handle on the problem you face. Your team can focus its methods on only those parts that are missing. The rest of the puzzle offers new clues to its solution. You really only need one line to figure this out.

The shared findings occupy much of the original unknown space

You now have a much better starting point, a major advantage, from which to discover something that completes a bit of the landscape of current science knowledge.

and you do not need to do this alone… and there is no race to win (see: R.E. Martin [1998])

Your agency’s program manager is jazzed. Your university puts out a press release. You are networking with new collaborators across the globe, planning the next project together. Of course, you cite the research of all of your sources in your publication. Their shared research products made your findings possible. [Demand Sharing] You add your data to theirs on an open repository. And you pop the resulting publication onto an open, online server.

Science is hard enough. Let’s work in our universities, agencies, and societies to promote the added, unreasonably effective, benefits of open sharing and collaboration.

Certainly the open data you discover from the null-research results cannot be expected to be quite so providential for your work. But these shared resources will offer an abundance of new information and helpful guidance for your own efforts. You are not alone. You don’t have to race. There is no race to win. Your lab has posted seventeen prior research results with data — all of them negative results — up on the web. Your grad students field requests for these data and collect citations for this work. They are making connections across the planet that should enhance their future careers. Curiously, without the race, science moves a lot faster.

A hundred-thousand science research teams working apart, each one of them looking to “win science” by keeping their work secret, would fail constantly against a hundred teams working in concert. [Open Collaboration Networks] The latter gain insights and save time by sharing all of their work toward a common goal of collective understanding.

The unreasonable effectiveness of shared null results is just one example of how embracing abundance instead of scarcity accelerates science knowledge discovery.

CODA: Free riders on the sharing-null-results bus

There is a “what’s wrong with this picture” perspective we can clear up, even if we don’t have an optimal solution space (that space will need to be emergent). Any move from a zero-sum game (e.g., science today) to a non-zero-sum game allows a few zero-sum-game players — those who don’t mind violating cultural norms for their own advantage — to add the shared non-zero-sum assets to their own work, without attribution, and potentially compete more efficiently than before. This is your basic “free-rider” problem. Every commons faces this problem.

Looking at this another way, the free-rider problem becomes a free-rider opportunity within the academy, as long as the cultural norms for sharing are present. [Share like a scientist] Every scientist is a “free-rider” on the discoveries they use in their own research. The real free-rider problem happens when open resources are acquired freely and aggregated by corporations, which want to sell these back to the academy as proprietary property, with some marginal value-added service.

Free-riding is a problem that culture change can help resolve. Yes, there will be those who grab these assets and use them without credit, or massage these and market them. The general strategy for dealing with jerks, those who take advantage of a positive cultural change that valorizes sharing, is to marginalize them wherever possible. Academic institutions can cultivate social outrage against those who plagiarize others’ work, including null results. Agencies can fund open repositories, and require their use. Open means really open. Closed, as John Wilbanks reminded us, means broken.

Additional readings and quotes from them

Bibliographic citations here

On publishing not capturing what science knows, and what reuse requires:

“In present research practice, openness occurs almost entirely through a single mechanism — the journal article. Buckheit and Donoho (1995) suggested that ‘a scientific publication is not the scholarship itself, it is merely advertising of the scholarship’ to emphasize how much of the actual research is opaque to readers. For the objective of knowledge accumulation, the benefits of openness are substantial…

Three areas of scientific practice — data, methods and tools, and workflow — are largely closed in present scientific practices. Increasing openness in each of them would substantially improve scientific progress.”

Nosek, Spies, and Motyl (2012); Buckheit and Donoho (1995)

On publication bias:

“Publication bias is a common theme in the history of science, and it still remains an issue. This is encapsulated in a piece of commentary published in Nature: ‘…negative findings are still a low priority for publication, so we need to find ways to make publishing them more attractive’ (O’Hara, 2011). Negative findings can have positive outcomes, and positive results do not equate to productive science. A reader commented online in response to the points raised by O’Hara: ‘Imagine a meticulously edited, online-only journal publishing negative results of the highest quality with controversial or paradigm-shifting impact. Nature Negatives’ (O’Hara, 2011). Negative results are considered to be taboo, but they can still have extensive implications that are worthy of publication and, as such, real clinical relevance that can be translated to other related research fields.”

Matosin, et al (2014); O’Hara (2011)

On impact factors and the h-index:

“Funders must also play a leading role in changing academic culture with respect to how the game is played. First and foremost, funders have a clear role in setting professional and ethical standards. For example, they can outline the appropriate standards in the treatment of colleagues and students with respect to such difficult questions as what warrants authorship and how to determine its ordering. Granting agencies should clearly emphasize the importance of quality and send a clear message that indices should not be used, as expressed by DORA, which many agencies have endorsed. Of particular importance is for funders not to monetize research outputs based on metrics, such as the h-index or journal impact factor.”

Chapman, et al (2019)

On Goodhart’s Law:

“The goal of measuring scientific productivity has given rise to quantitative performance metrics, including publication count, citations, combined citation-publication counts (e.g., h-index), journal impact factors (JIF), total research dollars, and total patents. These quantitative metrics now dominate decision-making in faculty hiring, promotion and tenure, awards, and funding.… Because these measures are subject to manipulation, they are doomed to become misleading and even counterproductive, according to Goodhart’s Law, which states that ‘when a measure becomes a target, it ceases to be a good measure’.”

Edwards and Roy (2016)

On the file-drawer problem:

“For any given research area, one cannot tell how many studies have been conducted but never reported. The extreme view of the ‘file drawer problem’ is that journals are filled with the 5% of the studies that show Type I errors, while the file drawers are filled with the 95% of the studies that show non-significant results. Quantitative procedures for computing the tolerance for filed and future null results are reported and illustrated, and the implications are discussed.”

Rosenthal (1979)

On Science and the Infinite Game:

“The paradox of infinite play is that the players desire to continue the play in others. The paradox is precisely that they play only when others go on with the game. Infinite players play best when they become least necessary to the continuation of play. It is for this reason they play as mortals. The joyfulness of infinite play, its laughter, lies in learning to start something we cannot finish”

Carse (1987).

On the free-rider problem:

“But here’s the thing. In addition to the free rider problem, which we should solve as best we can, there’s a free rider opportunity. And while we whine about the problem, the opportunity has always been far larger and its value grows with every passing day.

The American economist Robert Solow demonstrated in the 1950s that nearly all of the productivity growth in history — particularly our rise from subsistence to affluence since the industrial revolution — was a result not of increasing capital investment, but of people finding better ways of working and playing, and then being copied.”

Gruen: We’re All Free Riders. Get over It!: Public goods of the twenty-first century

Open science organizations need to achieve the status of a Zero-Asshole Zone

Original found on Reddit

“First: the asshole helps himself to special privileges in cooperative life; second: he does this out of an intrenched sense of entitlement; third: he is immunized against the complaints of other people.” Aaron James: Assholes: a Theory… the intro video (2012) <https://youtu.be/d2y-pt0makw>

“An asshole is someone who leaves us feeling demeaned, de-energized, disrespected, and/or oppressed. In other words, someone who makes you feel like dirt” (Robert Sutton, 2019; from <https://www.vox.com/conversations/2017/9/26/16345476/stanford-psychologist-art-of-avoiding-assholes> retrieved April 9, 2019).


Depending on who you talk to, the academy’s asshole problem is either extremely dire, vastly complicated, or both. Very few people would say it doesn’t exist. The “complicated” version tries to balance assholish behaviors with some idea that the pursuit of new knowledge in a hyper-competitive environment requires an intellectual with an enhanced sense of self-confidence, an enormous ego, a thick skin, and relentless drive. Only a complete narcissist can out-compete all the other assholes in the struggle for resources and credit. Colleagues who hang around this social-black-hole personality hope to ride along in the car of his success (i.e., “He may well be an asshole, but he’s our asshole”):

“The traits associated with narcissism explain why some people have an innate ability to dominate the scene. This includes the good serious face that implicitly tells their entourage that their research is important but also their willingness to use resources without any scruples or any sense of a possible cost for the community as a whole. This provides advantages in a system that monitors production and not productivity. We can understand why these innate leaders have supporters that praise their qualities — because of their fast-track access to resources that are usually difficult to get” (Lemaitre, 2015).

This “nice scientists finish last” mind-set serves to demonstrate why open science, with fierce equality and demand sharing, is an important, and urgent, remedy for the academy. The “cost” to the community — and to your own team, lab, department, or school — of even one real asshole is greater than you might at first guess. Assholes breed more assholes as they chase away nice, clever people. “Ultimately we are all diminished when clever people walk away from academia. So what can we do? It’s tempting to point the finger at senior academics for creating a poor workplace culture, but I’ve experienced this behaviour from people at all levels of the academic hierarchy. We need to work together to break the circle of nastiness” (emphasis in the original) (Mewburn, 2015). <https://sasconfidential.com/2015/11/09/niceness/> retrieved April 9, 2019.

You can argue ideas without being an asshole

It is really important here to understand that arguments over ideas are not intrinsically assholish events. As we will see below, assholes demean other individuals; their behavior is aimed at people. They will also be abrasive and demeaning in the manner in which they defend their ideas. We’ve all witnessed this in conferences and seminars. Entire paragraphs of meeting “code of conduct” rules are meant to counteract this kind of behavior. Sutton offers this: “enforcing a no asshole rule doesn’t mean turning your organization into a paradise for conflict-averse wimps. The best groups and organizations — especially the most creative ones — are places where people know how to fight” (Sutton, 2007).

Another complication the academy has is this: assholes with tenure. Sutton has no clear answer for this problem: “‘I’m with all these colleagues that are all tenured, and Stanford has no mandatory retirement,’ he points out. ‘So when I’m with an asshole, all I can do is hope’” (Sutton, 2017 <http://nymag.com/daily/intelligencer/2017/09/robert-sutton-asshole-survival-guide.html> Retrieved 4/9/2019; emphasis in the original).

“Science advances one funeral at a time.” Max Planck (1932/2015) had other, grander theoretical reasons for saying this. It also applies to assholes with tenure. So the best thing to do is: never hire an asshole in the first place. This is the essential message of the No Asshole Rule. No matter how much of an academic star they might be, adding them to your faculty is a huge mistake, even more so when they show up already tenured.

In a corporate environment, you can just ask a high-powered jerk employee to go be a jerk in some other corporation. CEO coaches offer a simple principle: “‘genuine collaboration and accountability for our own actions are non-negotiable if you plan on succeeding in this place’. … Get this right [as a CEO], and you will set yourself up with a culture that delivers far greater and more consistent long term success than the short term spikes delivered by a Jerk!” (Francis, 2017 <https://www.linkedin.com/pulse/high-performing-jerks-culture-crushers-matthew-francis/> retrieved April 9, 2019).

Assholes in positions of power in your organization can be sidetracked as much as possible, isolated and ignored as circumstances allow. Graduate students can be warned away, administrators can be informed, and professional associations — where these assholes are eager to get into leadership positions — can be immunized through active word-of-mouth. Remember that a single asshole can impact your organization for years.

In The Problem with Assholes, Elizabeth Cullen Dunn announced that “Anthropology has an asshole problem.” She notes, “[a]ssholery is contagious. Once people see an asshole being an asshole and winning, actually gaining power and prestige by being an obnoxious self-interested bully, it creates a huge incentive for other people to emulate that behavior. Assholery has ripple effects as it spreads in the form of disciplinary norms that not only enable, but hyper-value nasty, elitist, demeaning behavior” (Dunn, 2018 <http://publicanthropologist.cmi.no/2018/06/20/the-problem-with-assholes/> retrieved April 9, 2019). Anthropology is not alone. In a 2018 report, the National Academies note: “In a survey conducted by the University of Texas System…, about 20 percent of female science students (undergraduate and graduate) experienced sexual harassment from faculty or staff, while more than a quarter of female engineering students and greater than 40 percent of medical students experienced sexual harassment from faculty or staff” (NAS, 2018). The asshole problem is acute across the academy.

Situational assholes

Sutton (2018 and 2007) notes that, on occasion, anyone can act assholishly. These “temporary assholes” are not the real problem. They tend to want to repair their lapses of civility, and to feel bad about their own behavior. The real problem comes from “authentic assholes.” A little later in this handbook we will talk about “dark” and “bright” core behaviors (see: The bright and the dark) [NOTE: you are reading a draft of an essay in the Open Scientist Handbook, currently under construction]. This will allow us to unpack assholity into a small set of traits that can either be learned, or that display lasting personality disorders. Authentic assholes are also more likely to engage in an “exploitative sexual style” (Jones and Figueredo, 2013) that seeks instrumental sex with multiple partners; a trait that powers workplace harassment.

There is also a subset of assholes in the academy who are “accidental assholes” (Sepah, 2017 <https://medium.com/s/company-culture/your-company-culture-is-who-you-hire-fire-promote-part-2-anatomy-of-an-asshole-dba4f801b9f5> retrieved April 9, 2019). These are nerdish individuals who are, for example, on the autism spectrum, and who do not have the social skills to always act appropriately. They may do randomly assholish things, or they may simply copy the bad behaviors they find around them.

Not all assholes are born that way: lots of them are nurtured into bad behaviors on the job. The current, toxic academic culture can turn a temporary asshole into a chronic bad actor, a kind of “opportune asshole” (or, in evolutionary culture terms, an “adaptive asshole”): someone who believes that bad behavior is expected of them and rewarded by their peers. They are happy to oblige.

This may be why so many precincts of the academy seem to be swarming with assholes (jerks, bad-actors, etc.). When you add the opportune- and temporary-assholes to the authentic ones, the numbers and their bad effects really add up. Sutton addressed this situation in an article in the Harvard Business Review (<https://hbr.org/2007/05/why-are-there-so-many> retrieved April 9, 2019). As the National Academies found, the most asshole-infested profession is medicine and medical school:

“A longitudinal study of nearly 3,000 medical students from 16 medical schools was just published in The British Medical Journal. Erica Frank and her colleagues at the Emory Medical School found that 42 percent of seniors reported being harassed by fellow students, professors, physicians, or patients; 84 percent reported they had been belittled and 40 percent reported being both harassed and belittled” (Sutton, 2007).

So, why are we surrounded by assholes? Sutton explains:

“The truth is that assholes breed like rabbits. Their poison quickly infects others; even worse, if you let them make hiring decisions, they will start cloning themselves. Once people believe that they can get away with treating others with contempt or, worse yet, believe they will be praised and rewarded for it, a reign of psychological terror can spread throughout your organization that is damn hard to stop” (Sutton, 2007).

The who is more important than the what

The good news is that the principles of fierce equality and demand sharing are diagnostic and therapeutic in finding and neutralizing assholes. Once the opportune-assholes find that their bad behavior is no longer applauded or even acceptable, they will need to self-monitor their personal interactions. When open-science norms support public acknowledgement of the asshole problem, and offer remedies for this in departments, labs, colleges, professional associations, etc.; authentic assholes will find that their toxic actions serve only to isolate and shame them (even though they may not feel this shame). Over time, when new norms take hold, and new hires bring fresh non-assholic voices into the mix, your corner of the academy can regain its fundamental civility, and you and your students can again argue theories and ideas, methods and experiments, without resorting to abuse and fear.

Working in a zero-asshole environment is significantly more pleasant and productive than toiling in the psychological minefield that even one asshole can create in your department, laboratory, agency, or college. Achieving a zero-asshole status takes a principled stance and procedural follow-through. It is a worthwhile goal for you as an open science culture-change agent to pursue. “Bear in mind that negative interactions have five times the effect on mood than positive interactions — it takes a lot of good people to make up for the damage done by just a few demeaning jerks” (Sutton, 2007).

The asshole in the mirror

A final thought here. Each of us is capable of astounding assholishness at any time. Most of us have, on occasion (in seminars, through peer review, at office hours), been on the receiving end of abuse from those who control our academic fortunes and who use fear and humiliation in their critiques of our work, or of our capacities for research or teaching. We know how to do asshole; we’ve had enough training. We just need to not go there. And we need to isolate ourselves from the assholes we encounter. Sutton reminds us of this:

“If you want to build an asshole-free environment, you’ve got to start by looking in the mirror. When have you been an asshole? When have you caught and spread this contagious disease? What can you do, or what have you done, to keep your inner asshole from firing away at others? The most powerful single step you can take is to…just stay away from nasty people and places. This means you must defy the temptation to work with a swarm of assholes, regardless of a job’s other perks and charms. It also means that if you make this mistake, get out as fast as you can. And remember, as my student Dave Sanford taught me, that admitting you’re an asshole is the first step” (Sutton, 2007).

You can always take Sutton’s (2007) “asshole test” to self-diagnose. Or, if you find yourself believing that you are surrounded by idiots and that you should be recognized for your real talents and elevated into a higher level of society: you are probably an asshole, or at least, a “jerk”:

“Because the jerk tends to disregard the perspectives of those below him in the hierarchy, he often has little idea how he appears to them. This leads to hypocrisies. He might rage against the smallest typo in a student’s or secretary’s document, while producing a torrent of errors himself; it just wouldn’t occur to him to apply the same standards to himself. He might insist on promptness, while always running late. He might freely reprimand other people, expecting them to take it with good grace, while any complaints directed against him earn his eternal enmity” (Schwitzgebel, 2014 <https://aeon.co/essays/so-you-re-surrounded-by-idiots-guess-who-the-real-jerk-is> retrieved April 9, 2019).

This is a good reminder that assholes know who and what to kiss to get ahead. They may direct their assholocity at anyone and everyone equal to or below them in the academic scheme, while acting entirely respectful and encouraging toward those above them. Your dean may not know who’s an asshole, but grad students might have a clear idea. Listen to them. And should you, in a moment of fatigue or stress, lash out at your students, then, as a temporary asshole, it’s up to you to let them know you acted poorly and regret it.

Feeling mean today? Go ahead, be mean to your data; interrogate it ruthlessly. Be cruel to your theories. Don’t look to validate them, find new ways to attack them. Be an asshole with your methodology; it’s certainly not as rigorous as it could be. Then, have some more coffee and be kind and humble with your students and colleagues.

References

Jones, Daniel Nelson, and Aurelio Jose Figueredo. “The Core of Darkness: Uncovering the Heart of the Dark Triad: The Core of Darkness.” European Journal of Personality 27, no. 6 (November 2013): 521–31. https://doi.org/10.1002/per.1893.

Lemaitre, Bruno. An Essay on Science and Narcissism: How Do High-Ego Personalities Drive Research in Life Sciences? Bruno Lemaitre, 2015.

NAS: Committee on the Impacts of Sexual Harassment in Academia, Committee on Women in Science, Engineering, and Medicine, Policy and Global Affairs, and National Academies of Sciences, Engineering, and Medicine. Sexual Harassment of Women: Climate, Culture, and Consequences in Academic Sciences, Engineering, and Medicine. Edited by Paula A. Johnson, Sheila E. Widnall, and Frazier F. Benya. Washington, D.C.: National Academies Press, 2018. https://doi.org/10.17226/24994.

Planck, Max, Albert Einstein, and James Murphy. Where Is Science Going?, 1932.

Sutton, R.I. The No Asshole Rule: Building a Civilized Workplace and Surviving One That Isn’t. Hachette UK, 2007.

Things about science (that you may have not considered yet)

 


Photo Credit: Tom Hilton on Flickr

Science

THIS IS a draft of an introductory essay for the Open Scientist Handbook… I would love to know if it’s going in an interesting direction.

There are books and libraries of books that talk about science: its history, sociology, philosophy, politics, and practice. As a scientist, you’ve likely gotten this far in life without reading any of these. You probably don’t need to start now. In this essay, a few remarks about science will help anchor the (still being written) Open Scientist Handbook in a particular framework for science as a project, as an endeavor, and as a lifeway.

You are already a scientist, so you don’t need a general introduction to “science.” Also, you can learn everything you need about open science as a practice by checking out the Open Science MOOC.

WHEN THE HANDBOOK is done, this essay will have live-links into several other essays/sections in the book that you can explore if you wish, when it’s convenient. (NOTE: This handbook follows the “mullet” logic: all the great stuff up front, and the ragged details in the back.) Here you will find several Richard Feynman quotes. Do you want a good example of an open scientist? Be like Richard Feynman (who died before open science became a meme):

Feynman quote (still looking for the source):
“Physics is like sex: sure, it may give some practical results, but that’s not why we do it.”

Richard Feynman (from Wikimedia)

Science plays an infinite game because nature is the infinite game.

“If it turns out it’s like an onion with millions of layers and we’re just sick and tired of looking at the layers, then that’s the way it is, but whatever way it comes out, its nature is there and she’s going to come out the way she is, and therefore when we go to investigate it we shouldn’t pre-decide what it is we’re trying to do except to try to find out more about it” (Feynman et al, 2005).

Nature is not entirely knowable, for very good reasons, including its emergent, adaptive complexity and our embedded place within it. Not yet knowing all about nature is why science still exists. Nature never being fully knowable is the scientist’s best job security.

Nature is a great part of what James P. Carse (1987) called the “infinite game.” By studying nature, scientists get to be players in/with this infinite game. Not many humans get to do this for a living, but all of us do this because we are alive. When we stop breathing, the infinite game goes on without us.

Carse has a list of distinctions between “finite” and “infinite” games. Francis Kane’s New York Times (04/12/1987) review of Carse’s book says:

“Finite games are those instrumental activities — from sports to politics to wars — in which the participants obey rules, recognize boundaries and announce winners and losers. The infinite game — there is only one — includes any authentic interaction, from touching to culture, that changes rules, plays with boundaries and exists solely for the purpose of continuing the game. A finite player seeks power; the infinite one displays self-sufficient strength. Finite games are theatrical, necessitating an audience; infinite ones are dramatic, involving participants.”

The point of playing the infinite game is to keep playing, to learn how to play better, and to add players to the mix; to sustain the game and the knowledge required to play this at its highest levels; to change the rules not to cheat, but to evolve and explore.

The infinite game goes on even when humans are distracted by the finite games they make up to give themselves victories to distinguish their efforts. The academy can choose to invest in playing the infinite game, or it can get distracted by finite games of manufactured scarcity, ersatz excellence, and accumulated advantage. This is where we are and the choice we need to consider.

Because nature is intimate with the infinite game, science cannot avoid playing it. Biological evolution, for example, is a theory that describes some of the adaptive and emergent possibilities of the infinite game. There is no end-point to evolution; no species really wins, some just have the chance to keep on playing. In fact, species extinction has a generally positive effect[1] on the robustness of the ecosystem.

Complexity theories for the academy

Playing the infinite game is an intrinsically complex knowledge-management endeavor. Recent organizational management theories, such as the Cynefin Framework (started at IBM), warn that there are no “best practices” to deal with the “wicked problems” of adaptive complexity. This warning includes not just the marketplace, but also nature and culture. It turns out we are surrounded by emergent forces, and 20th Century management techniques are not up to the task.

While science methods have been addressing nature’s complexity for centuries, science knowledge-management and organizational governance have not kept up. It’s not hard to imagine science as an early-enlightenment project housed in late-medieval organizations. Open science looks to bring science governance into the 21st Century.

A bit on governance

This is an essay on science, not governance. Many sections of the Handbook offer governance guidance. Here it is only important to relate a few major ideas.

First: your organization’s governance needs to be playing the infinite game. If your department, university, or research lab is still talking about “excellence,” or “we are ranked # X!,” or “the average salary of our graduates is Y$,” you are playing finite games. You need to stop that. You need to build infinite-game governance. Open science is here to help.

Second: organizations that play finite games against others playing the infinite game will always lose. The infinite game is a “long game.” Its players don’t care what other organizations are doing. They play to get better, not to win. Over time, they will out-innovate, out-think, and out-knowledge any peer who is chasing short-term finite wins.

Third: science is already positioned to play the infinite game; it gets funding from society (science goods are public goods); it holds a long-term privileged status within society; its “foe” (nature) is formidable and pushes science to ever greater tasks; its plan is flexible, it will reinvent itself as needed; its goal is just and grand: sharable knowledge of the universe.

To play the infinite game, however, science, and your workplace, needs one more thing: it needs you, and others like you, to step up and lead. You might want to take a look at the section (being written) Leadership in the Infinite Game to discover how you can lead your team, your lab, your school, or your agency in the infinite game.

Science has never been winnable. Nobody gets to figure everything out and finish science. Every bit of new knowledge is inextricably bound with a whole lot of other bits. It is a great example of the “long game.” Likewise, any bit of learning, every insightful thought or sentence delivered in your lecture, is fully dependent on a history filled with a whole lot of other learning moments: all of which turn out to be equally fallible.

Science wallows in doubt, devours unknowns, and shits little turds of incomplete knowledge

“When Socrates taught his students, he didn’t try to stuff them full of knowledge. Instead, he sought to fill them with aporia: with a sense of doubt, perplexity, and awe in the face of the complexity and contradictions of the world. If we are unable to embrace our fallibility, we lose out on that kind of doubt” (Schulz, 2011).

Science looks squarely into the unknown. A scientist is never as interested in the work she has already published as she is in the next unknown she is tackling in her research. Science’s knowledge-mignardises (or petit fours: sounds better than turds) can and have accumulated into important and useful — but still incomplete — facts and theories about our world and ourselves. And only science can do this.

Science is a “world-building” exercise; it strives to explain every-thing it contacts. There is no alternative world out there.[2] There are strands of complementary knowledges and untested theoretics that could use some investigation; there are “pseudo-sciences” like astrology; but there is no alt-science world, not even on Reddit (we checked in March of 2019). The placebo effect shows we have a lot to learn about the healing process, but it does not invalidate what we know.

The main adversary to science is bad science; open science looks to remove the (perverse) incentives behind most of today’s shaky research methods and results:

“[I]n science… it is precisely when people work with no goal other than that of attracting a better job, or getting tenure or higher rank, that one finds specious and trivial research, not contributions to knowledge. When there is a marked competition for jobs and money, when such supposedly secondary goals become primary, more and more scientists will be pulled into the race to hurry ‘original’ work into print, no matter how extraneous to the wider goals of the community” (Hyde, 2009).

Science rests on the possibility that everything it knows today is wrong. As Feynman noted: “Once you start doubting, just like you’re supposed to doubt, you ask me if the science is true. You say no, we don’t know what’s true, we’re trying to find out and everything is possibly wrong” (2005). Kathryn Schulz wrote an entire book on Being Wrong; science has a central spot in this work:

“In fact, not only can any given theory be proven wrong… sooner or later, it probably will be. And when it is, the occasion will mark the success of science, not its failure. This was the pivotal insight of the Scientific Revolution: that the advancement of knowledge depends on current theories collapsing in the face of new insights and discoveries. In this model of progress, errors do not lead us away from the truth. Instead, they edge us incrementally toward it” (Schulz, 2011).

Science makes no claim to be right, but every claim to be the go-to method that can find out if something is wrong. From there, it harvests knowledge that has not (yet) been shown to be wrong; this is as close to being right/true as there is. And scientists get to have fun by being less-wrong today than yesterday. Scientists are passionate knowledge explorers.

The joy of discovery needs a home in the center of science

“Another value of science is the fun called intellectual enjoyment which some people get from reading and learning and thinking about it, and which others get from working in it. This is a very real and important point and one which is not considered enough by those who tell us it is our social responsibility to reflect on the impact of science on society” (Feynman et al, 2005).

Science is hard. It is the hardest ongoing task in all of humanity: after child rearing. One might expect society to honor, celebrate, and reward scientists for their labor. In the (not yet complete) Section on Joy and Passion you can discover more about how much fun you might be having right now as an open scientist.

For now, just consider that time spent playing with/in the infinite game can be intrinsically rewarding. In fact, it is potentially the most fun anyone can have. There is no video game, extreme sport, puzzle, quiz, theatre experience, or physical thrill that can compete with those moments you expand the edge of the planet’s knowledge envelope.

It is a privilege to be paid to spend your time in this pursuit. The privilege may not come with the type of salary/lifestyle society offers other occupations, but it does come with the freedom and the time to explore your own interests in nature/culture and the universe. This may be the best reason to keep the academy away from the logic of the marketplace, where freedom and time belong to others, where finite games fill your days and take you away from the very serious task of playing with nature.

Looking for the next patent, weapon design, or mass-consumable gadget or drug might make you rich, but it’s not science.

Using science resources and funding for science to accomplish these things, and their like, fits extremely well into the neoliberal logic of the marketplace. The incentives and rewards are nicely lined up. These finite games have obvious winners, and lots of losers too. Here is where the Matthew effect translates into cash rewards. Nearly all the current incentives for/in the academy have perverse consequences, including patents (See: Against Patents in the Academy [to be finished]). Marketplace counter-norms have already won, so it seems. Your “Research Excellence Framework” score matters a lot more than the actual new knowledge you and your colleagues have assembled.

This is why open science looks to build internal economies with its own logic, norms, principles, and rewards. There are lots of ways to be rich without much money; one key here is to manage your own expectations. Having “few needs, easily met” lets you locate a range of opportunities you might have overlooked. Here you might want to remember that open science is not just about publication access, it is about refactoring the academy to eliminate the sources for bad science, to accelerate the sharing of science objects across the planet, and to reboot the cultural DNA of academic organizations.

People will ask you, “how do you incentivize scientists to do the right thing […when the wrong thing pays off so well]?” You might respond by saying something like:

“How about giving scientists the means to do exceptional work, to have this work shared across the planet, to gather instant feedback from peers around the world, to live simply with plenty of time to do research without racing for funding, to have security of income and access to research tools.”

Time to do what you are passionate about is a great luxury, and has been for centuries. Setting your own goals, choosing yourself as the person who can contribute and accomplish great work, mentoring others to secure the future of science: these are incentives you can own.

Being a scientist is…

“Feynman always said that he did physics not for the glory or for awards and prizes but for the fun of it, for the sheer pleasure of finding out how the world works, what makes it tick” (Feynman et al, 2005).

At this point you might be thinking that the science described in this framework is not what you wake up and do every day. Your life may be dominated by demands from your organization for high productivity scores, funded research proposals, and publications in high impact journals; editors nudging you for your peer reviews; assistant vice chancellors pestering you with patent forms to fill out; constant rejections (curse you, reviewer three!) and revisions in your own output; courses to teach, lectures to prepare, and grades to give; and, right… home life. All this talk about joy and fun may seem oblique to your actual life.

Have hope. The high-pressure, low-fun career for scientists is not what science needs, and not how science was designed to operate (though it may be some time before it operates that way again). Some decades ago, science was still considered a pursuit done best outside of the marketplace:

“[Vannevar] Bush convened a panel of leading academics to formulate a vision for postwar science policy. In July 1945, the panel produced a 192-page document dramatically titled Science: The Endless Frontier. Heralding basic science as the ‘seed corn’ for all future technological advancement, the report laid out a blueprint for an unprecedented union between government and academia — a national policy aimed at fostering open-ended blue-sky research on a massive scale. Though he was a conservative, Bush laid a groundwork for what Linda Marsa aptly termed a ‘New Deal for science,’ seeking to preserve a realm where university research was performed free of market dictates.

’It is chiefly in these [academic] institutions that scientists may work in an atmosphere which is relatively free from adverse pressure of convention, prejudice, or commercial necessity,’ wrote Bush in Endless Frontier, ‘Industry is generally inhibited by preconceived goals, by its own clearly defined standards, and by the constant pressure of commercial necessity.’ Of course there are exceptions, he acknowledged, ‘but even in such cases it is rarely possible to match the universities in respect to the freedom which is so important to scientific discovery’” (Washburn, 2008).

This freedom is what you’ve lost; what open science is determined to regain. You can find a lot of discussions around “academic freedom.” Being a scientist carries a great responsibility to maintain a specific variety of this. Again, here’s Feynman:

“It is our responsibility as scientists, knowing…the great progress that is the fruit of freedom of thought, to proclaim the value of this freedom, to teach how doubt is not to be feared but welcomed and discussed, and to demand this freedom as our duty to all coming generations” (Feynman et al, 2005).

This “freedom of thought” extends to ideas shared freely within the academic community as gifts from scientists to the entire community. Hyde notes that this “gift” logic runs counter to the logic of the marketplace:

“A gift community puts certain constraints on its members, yes, but these constraints assure the freedom of the gift. ‘Academic freedom,’ as the term is used in the debate over commercial science, refers to the freedom of ideas, not to the freedom of individuals. Or perhaps we should say that it refers to the freedom of individuals to have their ideas treated as gifts contributed to the group mind and therefore the freedom to participate in that mind” (Hyde, 2009).

Being a scientist means giving what you learn, the best you have, to your peers in a sharing community, with the expectation that they will do the same. It is beneficial to remember that when your mother or grandfather was doing science, the academy’s position as external to the marketplace was valorized and celebrated. Being a scientist means you can demand the freedom, the time, and the resources to investigate your part of the infinite game: the object of your own study and your singular passion and potential joy.

“There can be occasions when we suddenly and involuntarily find ourselves loving the natural world with a startling intensity, in a burst of emotion which we may not fully understand, and the only word that seems to me to be appropriate for this feeling is joy” (McCarthy, 2015; see also: https://www.brainpickings.org/2018/06/07/michael-mccarthy-the-moth-snowstorm-nature-joy/).

Doing science is…

Science is the most difficult, most ambitious, most challenging pursuit that the human species has ever attempted. Every unknown is integrally linked to the entire infinite game that is the universe in which we swim. So your unknown — that bit of the game you have chosen to interrogate — is just as important as the next bit. Tackling your unknown is difficult by default (if it weren’t, this would already be a “known”). What is really painful is not being in constant, constructive contact with the five, or twelve, or a hundred other scientists somewhere on the planet who are, at this moment, running the exact same thoughts through their minds as you hold in yours.

Open science means you no longer need to consider these colleagues as your “competition.” A goal of open science is to connect you with these, your disciplinary siblings, and help you work faster, work better, and have more fun discovering more by working together than you can on your own. These are the people who can help you the most, and who need your expertise the most. Together you can make science stand up and dance in the infinite game.

Doing science means getting to play the infinite game for real. Doing science means unleashing your passion for knowledge exploration and diving into your research. Doing science means sparking the same passion for learning in your students. The role of open science in your life and for your research and teaching — and through the places where you work and collaborate — is to release you from manufactured scarcity, ersatz excellence, and the quest for accumulated advantage; from all of the finite games that others use to manage your life for their goals.

References

Carse, James P. Finite and Infinite Games. Ballantine Books, 1987.

Feist, Gregory J. The Psychology of Science and the Origins of the Scientific Mind. New Haven: Yale University Press, 2006.

Feynman, R.P., J. Robbins, H. Sturman, and A. Löhnberg. The Pleasure of Finding Things Out. Nieuw Amsterdam, 2005.

Hyde, Lewis. The Gift: Creativity and the Artist in the Modern World. Vintage, 2009.

McCarthy, Michael. The Moth Snowstorm: Nature and Joy. New York Review of Books, 2015.

Schulz, K. Being Wrong: Adventures in the Margin of Error. Granta Books, 2011.

Taleb, N.N. Antifragile: Things That Gain from Disorder (Vol. 3). Random House Incorporated, 2012.

Washburn, Jennifer. University, Inc.: The Corporate Corruption of Higher Education. Basic Books, 2008.


[1] The infinite game is anti-fragile. This is another reason for its unknowability and another clue that it’s a long game. Shane Parrish in the Farnam Street Blog <https://fs.blog/2014/04/antifragile-a-definition/> describes Nassim Taleb’s (2012) concept of “antifragility” this way:

“Antifragility is beyond resilience or robustness. The resilient resists shocks and stays the same; the antifragile gets better. This property is behind everything that has changed with time: evolution, culture, ideas, revolutions, political systems, technological innovation, cultural and economic success, corporate survival, good recipes (say, chicken soup or steak tartare with a drop of cognac), the rise of cities, cultures, legal systems, equatorial forests, bacterial resistance … even our own existence as a species on this planet. And antifragility determines the boundary between what is living and organic (or complex), say, the human body, and what is inert, say, a physical object like the stapler on your desk.”

[2] Science is bounded to concepts/theories that can be “falsified”:

“Ever since the 1930s, when Karl Popper first argued for falsification as the main criterion for demarcating science from nonscience, the topic of “pseudoscience” has played an important role in the philosophy of science. Just because someone claims to be doing science or to be a scientist does not mean they are. Popper argued that if the theory did not put forth predictions that were “brittle” and potentially “falsifiable,” then they were not science. Theories that can be twisted post hoc to explain any kind of experimental outcome are not science” (Feist, 2006).

Think of science like an incurable intellectual disease (Part 3)

ESIP welcomes first-time meeting goers

GO TO PART ONE if you haven’t read it yet…

Part 3: Platforms and Norms: There’s a commons in your science future

Science is broken: Who’s got the duct tape and WD40?

So, here we are, Act III.

Act I was all about how personal science is. Scientists are individually infected with their own science quest. Act II was about how social science is. Why else would they take a hundred thousand airline flights a year to gather in workshops and solve problems together (well, apart from the miles)? Act III needs to be about culture and technology. But not so much about the content of culture and the features of technology. Rather, about the doing of culture and the uses of technology.

Yes, the sciences are broken. Some part of this rupture was built-in (Merton, who outlined scientific norms in the 1940s, also outlined the integral tensions that disrupted these—i.e., the Matthew effect). But much of the damage has come from the displacement of the academy within society that has warped the culture of science.

Yochai Benkler generally describes the tensions of this warping as “three dimensions of power.” These power dimensions (hierarchy, intellectual property, and the neoliberal need to always show more returns) work against science as a mode of peer production that self-commits to shared norms. Science needs to find alternative means to fight hierarchy, share its goods, and own its own returns.

The sciences are stuck and fractured, in need of both WD40 and duct tape—culture change and technological support. Scientists need to operationalize open sharing and collective learning. For this, they must discard the institutions that enable the above dimensions of power in favor of new communities and clubs (in Neylon’s sense of the term) that can house cultures of commoning, and activate global peer production.

At a recent workshop where the topic of the “scholarly commons” was the theme, I was again impressed by descriptions of how these dimensions of power are locally applied in academic institutions across the planet. The workshop was designed to arrive at a consensus on a universal statement, a short list of principles, such as a restatement of Merton’s norms. Instead, the organizers were reminded that these so-called universal principles could only be accepted as suggestions. These would need to be locally reexamined, reconfigured, reauthorized and only then applied as needed against the institutional cultural situation at hand. Here is another look at the dynamics of that workshop. 

Earlier in the Summer, I attended a breakout session at the ESIP Meeting where a long discussion about building an Earth science data commons concluded that ESIP was either already one, or ready to be one. A second determination was that ESIP was about the right size for this task, that multiple data commons could be built across the academy on the model of ESIP, but with their own sui generis culture and logic of practice, geared to local conditions and particular science needs.

The real question is not how to create the scholarly commons, but rather how to rescue (or re-place) current academic institutions using commons-based economies, and using the various norms of commoning as a baseline for the shared cultural practice of open science. The real task is then how to help move this process forward.

If commoning is the WD40 to release science from the sclerotic hold of its 19th Century institutions (side note: Michelle Brook is assembling a list of learned societies in the UK; the list already has more than 800 entries), technology is the duct tape needed to help these hundreds and thousands of commons communities work in concert across the globe. The internet—which science needs to learn to use as a lateral-learning tool at least as well as the global skateboarding community already does—holds the future of science. Shared community platforms, such as Trellis, now under construction at the AAAS, or the Open Science Framework, from the Center for Open Science, can help solve the problems created by a thousand science communities supporting hundreds of thousands of clusters (collectives) needing to discover each other’s work in real time.

For commoning to gain traction in the academy, we must first explore this as a generative practice for open science. But as each commons spins up its own variety of commoning, we need to avoid prescribing universal norms for them. Instead, the most productive next step might be to unleash a more profound understanding of the circumstances of scholarly commoning by building a set of design patterns that will be localized and applied as needed to yank local institutions away from hierarchy, intellectual property wrongs, and the pull of the margins that preempt ethical decisions and norms.

Next summer, the ESIP Federation is hoping to host a two-day charrette at its Summer Meeting in Bloomington, Indiana, to begin the process of building scholarly commons patterns. A pattern lexicon for scholarly commoning will potentially help hundreds of science communities self-govern their own open resources and commoners.

Lessons learned (Parts 1-3):

  1. Science is intensely personal. Scientists are already engaged in their own struggle with the unknowns they hope to defeat. Their intellectual disease is fortunately incurable.
  2. Science is already social. Just in the US, several thousand workshops a year evidence the scientific need/desire to build collective knowledge.
  3. Science is cultural. Self-governed science communities can use intentional cultural practices to help scientists prepare to work together in virtual organizations with shared norms and resources.
  4. Community opens up arenas for online collaboration. Instant collectives, such as ESIP clusters, can replace expensive workshops and enable scientists to share knowledge and solve problems.
  5. These communities need to consider themselves as commons to replace institutions that have been twisted by the three dimensions of power (hierarchy, intellectual property, and neoliberal economics).
  6. Each commons needs to work locally, attuned to its local situation within science domains and academic institutions.
  7. The academy needs to harness the internet and technology platforms to knit together localized science/data commons into a global web of open shared resources and collective intelligence.

Using Patterns to Design the Scholarly Commons


Force11 is looking to build an alternative academy based on a scholarly commons that supports the entire research to publication effort.

I just published a blog on the AGU blogspace.  Take a look here, or keep reading to get the gist.

Several groups (e.g., Force11 and the European Commission) are calling for an integrative scholarly commons, where open-science objects—from ideas to published results—can be grown, shared, curated, and mined for new knowledge.

Building a commons is more complex than simply opening up objects to the public. The activity of commoning is what separates a commons from other examples of publicly shared resources. Research into the various commons found across the globe reveals that every successful commons is also an intentional cultural activity. And so, when open-science organizations talk about building a commons, they also need to consider growing a community of commoners.

How do we attain an intentional and reflexive cultural purview of commoning for science? One promising idea is to borrow from the open-access software community’s reliance on design patterns. Software design patterns reveal solution spaces and offer a shared vocabulary for software design.
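
To make concrete what "a shared vocabulary for software design" means, here is a minimal sketch of one classic design pattern, Observer, in Python. The names are illustrative only, not drawn from any particular library; the point is that the pattern's name compresses a whole reusable solution into a single shared word:

```python
# Minimal Observer pattern: a named, reusable solution for
# "notify many interested parties when something changes".

class Subject:
    """The thing being watched; it knows nothing about its watchers."""

    def __init__(self):
        self._observers = []

    def subscribe(self, callback):
        # Register a callable to be invoked on every event.
        self._observers.append(callback)

    def notify(self, event):
        # Fan the event out to every subscriber, in subscription order.
        for callback in self._observers:
            callback(event)

# Usage: two independent parties react to the same shared event.
repo = Subject()
log = []
repo.subscribe(lambda e: log.append(f"curator saw: {e}"))
repo.subscribe(lambda e: log.append(f"miner saw: {e}"))
repo.notify("new dataset")
```

Saying "use an Observer here" communicates all of the above in three words — the same economy a pattern lexicon could offer scholarly commoning.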

A lexicon of design patterns could play the same role for the scholarly commons (See also: Patterns of Commoning). Since every commons requires a different set of practices suited to its peculiar circumstances, various commons within the academy will need to grow their own ways of commoning. The pattern lexicon would be expanded and improved as these scholarly commons emerge and grow.

Developing a pattern lexicon for the scholarly commons is an important and timely step in the move to an open-science future. Design patterns for a scholarly commons can reveal some promising solution spaces for this challenge, helping the academy make a transition from archaic print- and market-based models to commons models based on open network platforms.

Acknowledgements: Thanks to David Bollier for his contributions to this post.

5 signs that you need to rethink and reboot your membership engagement effort

Members feeling disengaged? Maybe you’re doing it wrong.

In your volunteer-run, virtual organization, how do your members become engaged in sharing their time and knowledge? Do they come away from these activities enthused? Or do they feel like they never want to come back? Here are five danger signals that mean you should rethink and possibly reboot your organization.

  1. You can’t agree on what engagement is.
    What are your metrics for engagement? How are you collecting these? What does engagement look like in your organization? If you cannot answer these questions, then you need to start over and rethink why anybody should become a member.
  2. When members tell you what’s important to them, you have no way to respond.
    Engagement is where your organization shows its value to its members. Your members are intelligent, enthusiastic, and busy. They showed up. Every member needs to be able to find support to do what is important for them (inside the boundaries of the vision/goal of the organization). When your organization can amplify the efforts of each member to solve their immediate problem or support their creative input, they will be engaged. And they will engage each other. Remember the first rule of a volunteer organization: each member needs to get more than they give. Members need every reason to come back and bring their colleagues. When a new member shows up and tells your staff, “I really need to solve this problem,” that becomes a priority for your organization. If it’s not, then you need to start over.
  3. You’ve invented a list of tasks that you want volunteers to work on. They need to choose from this list if they want to engage with your organization.
    Helping the organization with higher-level organizational work—planning, strategy, etc.—is not engaging. It’s a service. This is something that people who are already engaged will do in small doses. In volunteer-run organizations, members eat the pudding first, and then get the meat. If your answer to a member is to point them at a web page with a list of things you want them to do, then you need to start over.
  4. You’ve got an “engagement team” instead of being an engagement organization.
    Volunteer-run organizations are propelled by engagement. This is the locomotive that pushes all other activities. If your organization has an engagement team somewhere trying to figure things out, then you’ve lost your locomotive and you’ll only grow and move as fast as the team can pump a hand car. If engagement is not your first order of business, then you need to start over.
  5. Nobody is certain how decisions are made.
    Engagement runs on trust and is propelled by a governance that is open and responsive. Members of volunteer-run organizations need to know they are in control. Every time a decision is rethought or rescinded by the staff or through some back-door conversation with donors; every time the membership only gets to vote on a document somebody else wrote; every election where the nominations fall to the same people: members become less engaged. If your governance is not actually run by the volunteers who are your members, then you need to start over.

Photo credits: poor doggie: bull-dog story

EarthCube is poised to start its mission to transform the geosciences

The red areas are sandstone.

Here is the current vision statement of EarthCube

EarthCube enables transformative geoscience by fostering a community committed to providing unprecedented discovery, access, and analysis of geoscience data.

The primary goal of membership in EarthCube, and indeed of the entire culture of the EarthCube organization, is to support this vision. The EarthCube vision describes a future where geoscience data is openly shared, and where a new science, one based on an abundance of sharable data, assembles new knowledge about our planet. Certainly shared open source software and open access publishing are anticipated in this vision. The vision accepts that it will take a committed community of domain and data scientists to realize this goal.

What can we predict about the culture of a community committed to transformational geosciences? How is this different from the culture of a community pursuing geoscience currently? We need to start building out our imagination of what transformative geoscience will look like and do.  One thing we might agree on is that this will be a much more open and collaborative effort.

Unprecedented data discovery, access, and analysis in the geosciences coupled with open science best practices will drive knowledge production to a new plateau. Many of today’s grand challenge questions about climate change, water cycles, human population interaction with ecosystems, and other arenas will no longer be refractory to solution. For now, we can call the engine for this process “Open Geosciences” or OG for short. What will OG pioneers be doing, and how can EarthCube foster these activities?

  • Pioneering OG scientists will collect new data using shared methodologies, workflows, and data formats.
  • These OG scientists will describe their data effectively (through shared metadata) and contribute this to a shared repository.
  • OG scientists will analyze their data with software tools that collect and maintain a record of the data provenance as well as metrics on the software platform.
  • OG scientists will report out their findings in open access publications, with links to the data and software.
  • OG scientists will peer review and add value to the work of others in open review systems.
  • OG domain and data scientists will reuse open data to synthesize new knowledge, and to build and calibrate models.
  • OG software engineers will collaborate on open software to improve capabilities and sustainability.
  • OG scientists will share more than data. They will share ideas and null results, questions and problems, building on the network effect of organizations such as EarthCube to grow collective intelligence.
  • OG science funding agencies will work with OG communities to streamline research priority decisions and access to funding.

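The provenance bullet above can be sketched in code. What follows is a hypothetical illustration — not any existing EarthCube tool — of the minimum an analysis wrapper would record so that a result can later be traced back to its inputs and software version; all field names are invented for this sketch:

```python
import hashlib
import json
from datetime import datetime, timezone

def run_with_provenance(func, data, software_version):
    """Run one analysis step and return (result, provenance record).

    A hypothetical sketch of the provenance capture described above;
    the record fields are illustrative, not a standard schema.
    """
    record = {
        "step": func.__name__,
        # Fingerprint of the exact input data this result came from.
        "input_sha256": hashlib.sha256(json.dumps(data).encode()).hexdigest(),
        "software_version": software_version,
        "run_at": datetime.now(timezone.utc).isoformat(),
    }
    return func(data), record

# Usage: a trivial analysis step with its provenance attached.
def mean(xs):
    return sum(xs) / len(xs)

result, prov = run_with_provenance(mean, [1.0, 2.0, 3.0], "0.1.0")
```

Publishing `prov` alongside `result` is the small habit that makes the later bullets — open review, reuse, and model calibration — possible.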
At this stage, EarthCube is in its most institutionally reflexive moment and is most responsive to new ideas. Like a Silicon Valley start-up flush with cash and enthusiasm, EarthCube is poised to build its future up from the ground. EarthCube can succeed in its vision without attempting to directly influence the embedded cultures of government organizations, tier one universities, professional societies, and commercial publishers. EarthCube will succeed by building its own intentional culture, starting with its membership model and focused on its vision. EarthCube will only transform geoscience by proving that its members can do better science faster and cheaper through their commitment to the modes of scientific collaboration now made possible through EarthCube. EarthCube will transform science by transforming the practices and the attitudes of its own members.

NASA image by Robert Simmon with ASTER data. Caption by Holli Riebeek with information and review provided by David Mayer, Robert Simmon, and Michael Abrams.

Hitting the target makes all the difference for the software life cycle

Sky diver jumping from plane

At a recent NSF-funded workshop that was looking at how a new institute might help scientists become better stewards of the software they create for their research, a day was devoted to discussing the entire software life cycle, and the differences between commercial software, open-source, community-led software, and academic science software. A long list of positives and negatives accumulated to describe the triumphs and the pitfalls of each of these arenas for software development. Most of the triumphs were in the commercial software column, and the great majority of pitfalls were common to science software development.

That evening, upon reflection, it occurred to me that commercial software was simply very good at determining a target (feature and/or customer) and then hitting this target. It seemed like academic software developers, admittedly working on shoestring budgets, only seemed to cobble together whatever feature their next experiment might require, with the result being software that was almost secreted over time (I almost said excreted…) instead of crafted for the long haul.

It struck me—reflecting back on my single skydiving adventure, in the days where you still took your first jump solo on a static line—that my focus at that time had been narrowed down to the single fear of getting out of the plane. I did not want to freeze in the door. Consequently, I seemed to have not paid as close attention as I might to what happens next. As a result I ended up landing in a field away from the airport (upside: I did not land on the Interstate). I hit ground, no problem, and without breaking anything, but I missed the target completely.

Again, commercial software developers are firmly focused on their targets, and they make software that helps others find the same target too. To do this they know how to jump and when to pivot in order to land right on that X. When Instagram created its software it focused on the simplicity of sharing photos.

Open-source, community-led software tends to lose that target focus, in part because the developer community usually has several simultaneous targets in mind. What they are good at is designing the parachute and the gadgets that help to figure altitude and wind. They make jumping safer and more fun, and their goal is to enable more people to do the same.

Getting back to science software developers, these are often individuals or small teams working as a part of a larger project. They wrangle the datasets and finagle some visualizations. They add a button or a drop-down list and call it a GUI. They tell their team how to use it and what not to do so it won’t crash. Then they go ahead and do their experiments and write it up. In software life cycle terms, all they know how to do is jump out of the plane. Forget the target, never mind the parachute…just jump.

The goal of the NSF workshop was to help design an institute that would support better software development practices across the environmental and earth sciences. To do that, science software developers need to focus all the way to common targets of resourceful, reliable, and reusable software. Do you have some ideas? Feel free to join the ongoing conversation at the ISEES Google+ Community.

Photo Credit: CC licensed on Flickr by US Air Force

The next generation of environmental software needs a vision and some help


At a three day workshop, a group of scientists explored a vision of the “grand challenges” that eco- and earth science face in the coming decade. Each of these challenges, if answered, would provide invaluable new knowledge to resource planners and managers across the planet. And every challenge contained a workflow that called upon software capabilities, many of which do not currently exist: capabilities to handle remote and in situ observations and environmental model output in order to incorporate multiple data layers and models at several resolutions, from a prairie to the planet. Water cycles, pollution streams, carbon sequestration, climate modeling, soil dynamics, and food systems—achieving the next plateau of understanding these processes will require a massive investment in computing and software. The reason for this workshop was to help inform a new institute that can provide key services to make this investment pay off.

Much of this software will be built by research teams that propose projects to solve these grand challenges. These teams will be multi-institutional and are likely to be more focused on the science side of their project, and less on the value their software might acquire by being built on standards, using best-practice coding, and ready for reuse by others. The history of federally-funded science software is crowded with abandoned ad hoc project-based software services and products. I’ve helped to author some of these. One of the federally-funded products (a science education software package) I helped produce had its home-page URL baked into its user interface. After the project funding ended, the PI did not renew the domain name, and this was picked up by a Ukrainian hacker, who used it as the front end of a pornography portal. So the software UI (distributed in hundreds of DVDs) now pointed students to a porn site. A far more prevalent issue is that of software built with 3rd-party services (remember HyperCard?) that have subsequently changed or died, breaking the software after the funding is gone and the programmer has moved on. The point here is that there are dozens of lessons already learned by science software developers, and these need to be assembled and shared with the teams that are building new software.

There is still more value to be added here. A software institute can offer a range of services that will make funded software more reliable, more reusable, and more valuable to science. Much of the federally-funded software development will be done by university staff scientists and graduate students. Most of the latter are in the beginning stages of learning how to program. A crash course on agile programming and Git, or other basic programming skills, could help them get up to speed over a summer. An up-to-date clearinghouse of data and file format issues and recommendations, a help-desk for common CMS and data access problems, and particularly, personal (Skyped) help when the grad student hits a wall: these services can save a funded project from floundering. Altogether, these services can save the project’s software from an early grave. Research into extending the lifecycle of science software is needed to help science maintain the longer-term provenance of its methods and findings.


This workshop was organized by the team that is looking to build the Institute for Sustainable Earth and Environmental Software. Here is their website: http://isees.nceas.ucsb.edu

From carrots and sticks to donuts and heroin: what academic software producers need to learn from their commercial counterparts.

Carrot and Stick

I’ve spent much of the past decade managing software development projects. These projects can be sorted into two types. One type involves collaboration with academic organizations, mainly with government agency funding. The other type is with commercial partners and an eye toward the open marketplace. Software project management for both types is similar in most ways. Both types used the same agile software development process. The agile project management process includes a conversation about user experience and engagement. In fact, it starts with user problems and stories and use cases.

The notion of customer-driven design is a central feature of all good software development. So too is the goal of creating something of immediate use and widespread need. There are some differences that, when teased out, suggest arenas where academic (and other, open-source) software developers might want to learn something from commercial software development practices. The reverse is not as obvious at the software development level, but is more evident at the user licensing and IP level. At the code level, the process of development and design for academic/government agency software can be quite different from commercial software. This difference is mainly a matter of user expectations. As Doc Searls noted, “…Microsoft needed to succeed in the commercial marketplace, Linux simply needed to succeed as a useful blob of code” (Searls 2012, Kindle Locations 2262-2263).

A couple of conversations in the academic software code arena can illustrate how far apart these two types are. In the first, I was told that “we can deliver this with the warts showing, as long as it works.” And in the second, someone noted that some combination of “carrot and stick” could be applied to make sure people used the software service. Compare this to the goal that Guy Kawasaki promotes for software: enchantment. “There are many tried-and-true methods to make a buck, yuan, euro, yen, rupee, peso, or drachma. Enchantment is on a different curve: When you enchant people, your goal is not to make money from them or to get them to do what you want, but to fill them with great delight” (Kawasaki 2011, Kindle Locations 185-187). No warts or sticks allowed if your goal is enchantment. In fact, not that many carrots, either.

I countered the carrot and stick suggestion with one of my own, “How about donuts and heroin?” In commercial software development, it’s not uncommon to ask “So, what is the heroin in this software?” The idea is that the customer would be so enchanted with the software that they would gladly use it every day. Even the worst experience should still be a donut, and not a wart, and certainly not a stick.

Certain realities do intrude here. Academic and agency software developers work on the tiniest of budgets. They tackle massive problems to connect to data resources and add value to these. They commonly have no competition, which means they solve every problem on their own. A “useful blob of code” is better than no code at all. But still, they might consider imagining how to enchant their users, and provide a few dimples and donuts along with the warts and the carrots. Because their users spend most of their digital lives on the daily heroin supplied by Apple and Google and Facebook, being handed a carrot may not do the trick.

Kawasaki, Guy (2011-03-08). Enchantment: The Art of Changing Hearts, Minds, and Actions. Penguin Group. Kindle Edition.

Searls, Doc (2012-04-10). The Intention Economy: When Customers Take Charge. Perseus Books Group. Kindle Edition.

Photo credits, CC licensed from Flickr:

carrot and stick: bthomso

carrot on plate: malias

donuts: shutterbean

eating donut: Sidereal