Hulk want Negroponte Shift in publishing now

It’s one of those weeks where the clanking chains of the ancient devices of academic publishers have been more than a bit annoying, and it looks like no amount of WD-40 will smooth the transition into digital delivery without first demolishing these anachronistic machines and their devilish DRM schemes.

This morning on NPR there was a piece on how public libraries must re-subscribe each year to the very same digital eBook files in order to lend them out, one user at a time. No overdue books here, the narrator notes: the digital files simply disappear from the user’s device, forcing readers to queue up (and wait for weeks or months) until the file is again available. The more popular the book, the longer the wait.

I also had the opportunity to search for and find a book published by an academic press in Europe, only to be informed by their website that I could download a digital copy for upwards of $100. Makes the iTunes bookstore seem cheap.

And then, with a link from William Gunn, I made myself read a response to the open access demands by scholars, students, and libraries, written by an (IMHO fatuous) mouthpiece of the publishing industry in the Guardian. At one point he justifies the bloated profits of the industry by noting that publishers pay taxes on these (not if they can help it) and the government uses these taxes to fund (wait for it)… research. For-profit academic publishers are the “research producers” that keep the wheels of science rolling. Lord help us if the (socialist) open access lumpen masses get their way.

On the more hopeful side, John Wilbanks and the Open Access Gang of Four are most of the way to a successful open-access-for-research petition on the White House petition site. And if only the intern who programmed the Drupal user authentication for that site had hooked in a better module, the necessary 25,000 signatures would likely already have been gathered.

At last look, over at The Cost of Knowledge, 11,923 scholars have pledged not to give their services to Elsevier.

And, today we will find out if Redditors can oust Texas representative Lamar Smith, who co-authored SOPA.

We are all waiting for everything to be digital: available and searchable, browsable and linked, curated and filtered (yet without the bubble), semantically rich, and, of course, free. We are not there yet; there is plenty of cruft to clear away. Time to point the Hulk at the entire academic publishing enterprise and say “SMASH.” It couldn’t hurt.

Use a virtual organization to borrow enough requisite variety to innovate in a data-rich world

Photo from Flickr, used with CC license.

What can you do when your research team says, “This is way too complex”?

When your government agency or university laboratory looks to innovate in a world where multiple, large data inputs are coming online, how can you stay ahead of the inherent complexity of the systems you are creating and interrogating? One way to look at this problem is through Ashby’s law of requisite variety. A principle of cybernetic management, requisite variety holds that unless a control system has at least as much variety as the environment it controls, it will fail, which in practice means that some part of the environment will be controlled elsewhere. Elsewhere is also where innovation happens, because unless you can corral the inherent variety of the problem you face, it will seem too complex for your team to innovate a response (Kofman [1]). You can either go out and hire a bigger team, or you can borrow enough requisite variety just long enough to bring your own team up to speed. That is a great use for a virtual organization (VO).
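Ashby’s ratio can be made concrete with a toy model (my own construction for illustration, not drawn from Kofman or the other sources cited here). Picture an environment that throws one of n distinct disturbances at a regulator holding only m distinct responses, where each response steers each disturbance to a distinct outcome. A brute-force search shows the regulator cannot confine outcomes to fewer than n/m states: exactly the requisite-variety bound.

```python
# Toy illustration of Ashby's law of requisite variety.
# (Hypothetical model, my own construction, not from the cited sources.)
# Environment: n distinct disturbances d = 0..n-1.
# Regulator: m distinct responses r = 0..m-1.
# Outcome of a pairing is (d - r) % n, so each response maps each
# disturbance to a distinct outcome. The regulator "controls" the
# environment to the extent it can confine outcomes to a small set.
from itertools import combinations

def min_outcome_variety(n_disturbances, n_responses):
    """Smallest outcome set the regulator can confine the system to."""
    for k in range(1, n_disturbances + 1):
        for target in combinations(range(n_disturbances), k):
            # Can every disturbance be steered into the target set?
            if all(any((d - r) % n_disturbances in target
                       for r in range(n_responses))
                   for d in range(n_disturbances)):
                return k

# A regulator matching the environment's variety absorbs everything:
print(min_outcome_variety(6, 6))  # 1
# Cut the regulator's variety and outcome variety grows as n/m:
print(min_outcome_variety(6, 3))  # 2
print(min_outcome_variety(6, 2))  # 3
```

The point of the sketch is the trend, not the model: halve the regulator’s variety and the uncontrolled residue doubles, which is why a team short on requisite variety experiences a problem as “too complex.”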

Theorists of knowledge management have applied Ashby’s law in various modes, including a thread of interest in what is called a “learning organization” and a mode of business communications management known as “systems thinking.” [There is a great amount of information about this available at the Society for Organizational Learning.] The point they make is that the team you build to tackle a tough problem needs a broad enough portfolio of knowledge and skills to address all parts of the problem. Not only that, but team members need to communicate their skills and knowledge to one another so that each shares in this collective intelligence. Andrew Van de Ven put it this way: “Requisite variety is more nearly achieved by making environmental scanning a responsibility of all unit members, and by recruiting personnel within the innovation unit who understand and have access to each of the key environmental stakeholder groups or issues that affect the innovation’s development” (Van de Ven, 600).

Virtual organizations include online communities, research collaboratories, open source software programmer collectives, and other groups in a great variety of arenas and professions. What they offer is an open network of common interest and complementary talents. When your business or agency is looking to innovate in a world where data are more plentiful than insights (Abbott, 114), then it makes great sense, in terms of time and effort, to join a VO and gain enough requisite variety to conquer complexity and kickstart some innovation.


Abbott, Mark R. (2009) “A New Path for Science?” in The Fourth Paradigm: Data-Intensive Scientific Discovery. Hey, Tony, Stewart Tansley, and Kristin Tolle, eds.  Pp. 111-116. Redmond: Microsoft Research.

Fred Kofman [1]

Van de Ven, Andrew H. (1986) “Central Problems in the Management of Innovation.” Management Science, Vol. 32, No. 5 (May), pp. 590-607.

Post Publication Peer Review: It’s in your future

I recently attended the PLoS Forum in San Francisco. For a couple of years, I’ve been encouraging PLoS to find a way to experiment with post-publication peer review. However, the pathway from the current academic peer review system to something potentially better (faster, fairer, more precise) must first overcome the enormous weight of influence that the current publication system holds for academic careers. I was encouraged that half of the day was spent trying to figure out how to move ahead with post-publication peer review.

Here is an excerpt from a Knol I wrote about scaffolding a new system based upon the reputation system of the old system:

“The real sticking points preventing scientific communication from taking full advantage of digital distribution are the following: 1) top ranked journals have cornered the reputation economy in terms of impact on tenure (they are a virtual whuffieopoly: for the term “whuffie” see Hunt and Doctorow). 2) the very same journals remain locked into the 20th century (with resemblances to prior centuries) print-based publishing model, built on blind peer review and informed by the scarcity of space available in any printed journal. The task then, is to release them from their print-based constraints, while rewarding and supporting them to continue to be a high-end filter for quality science; and then transitioning their whuffie-abilities to a form more suited to the rapid digital dissemination of scientific outcomes. The academy needs great filters to help guide readers to the best science among hundreds of thousands of new papers every year. Universities need fair and broad feedback from the academic community to decide which faculty deserve promotion. The research community needs to accelerate publication speed and minimize editorial overhead. And the public needs markers that help them determine good science from the rest. Open-access content is the first step. The next step might need some badges.”

You can read the whole piece at Post-Publication Peer Review in the Digital Age.

IEEE eScience meeting at Oxford: 4th Paradigm

The first paradigm is experimental science. The second paradigm is theoretical science, and the third, computational science. The fourth is data-intensive science. This data-intensive science paradigm is also a feature of the emerging datafullness of the object of study. Satellites and sensorwebs, CCTVs and Streetviews, MRIs and CAT scans, Facebook and YouTube: what we study is no longer data poor, but increasingly data-full. The question is no longer one of how to scrape up enough data to create a study, but rather how to winnow the emerging data deluge. Sociologists can no more ignore the data available from online social networks than meteorologists can ignore an emerging Mid-Atlantic tropical depression.

In his talk at the IEEE eScience meeting, Jeff Dozier also mentioned that the earth sciences are entering a new task horizon. In the 1800s and 1900s, the earth sciences were discipline-oriented sciences. From the 1980s onward, we saw the development of earth system science. Emerging now: earth knowledge in service of policy to address planetary risks, such as climate change.

The eScience challenges are many here. The increase in observational data makes it possible to refine the resolution of climate models, which in turn pushes the limits of available HPC resources. The data processing algorithms designed for science must be made robust enough to sustain resource and environmental enforcement decisions. New venues for communication between scientists, data providers, and policy decision makers need to be supported and used. This is a real opportunity for organizations such as the ESIP Federation to become active forums for problem solving.

Photo Credit: NASA Earth Observatory

Microsoft Research’s 4th Paradigm ebook is available under a CC license here: