Democracy First: effective governance to grow an active social network

Any social network service is much more than its code, its content, and its communities. It is all of these within a dynamic framework of rights and roles, and of needs and opportunities. All of these will flourish best when built on a thoughtful system of best practices and clear rules.

Your social network platform serves the various groups that must come together to build, maintain, support, and use the service. These groups include teams of developers/administrators, a number of key sponsors/funders, member organizations, and individual members. Each of these groups has interests that you want to fulfill.

These interests can be identified by the issues they engender: funding, community (leadership, reputation), privacy (policy creation), sharing (licensing), branding, technology (features and standards), and policing (boundary control for content and bad behavior). All of these issues (funding excepted) can be addressed over time through the right kind of governance system. This system is the garden where communities can grow.

Too often, software services (even currently successful ones such as Facebook and Wikipedia) have paid too little attention to governance in their infancy. This failure has far-reaching consequences, some of which are now becoming evident in these early Web 2.0 experiments. Best practices suggest that governance needs to be considered up front, at the same time as software design.

One of these best practices is to get your members involved in devising (and then owning) the governance system. So the plan is first to create a starting point by defining membership within the network, and then to facilitate the members in creating the system themselves.

The goal is to build a nimble system that rewards sponsors for their support, enables open-source software development, encourages organizations to add their members, and gives each member not simply a voice, but a say in how their network runs.

Photo Credit: CC license on Flickr by undersound

Why should I move my webspace onto a full Web 2.0 platform?

One of the opportunities created by moving your program’s content onto a robust Web 2.0 platform is the ability to add social networking (peer-to-peer conversations) and social media (comments, ratings, and tagging of content) capabilities to your user community’s activities. You also gain new lines of communication with your service users and community members.

Developing on one of the leading Web 2.0 CMS platforms will also make it easier to integrate other important tools (such as Google language translation capabilities) into your program. Content curation capabilities can also be assigned to the appropriate employees and volunteers, so that content updating is a shared responsibility that requires limited technical support.
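
For example, here is a minimal sketch (my own, not tied to any particular CMS) of how a platform script or plugin might call the Google Translate REST API (v2) to localize a piece of content. The endpoint and parameter names follow Google's published v2 interface; the API key and the helper function are placeholders.

```python
# Hypothetical sketch: translating CMS content via the Google Translate
# REST API (v2). The API key is a placeholder.
import requests

API_URL = "https://translation.googleapis.com/language/translate/v2"

def translate(text, target_language, api_key):
    """Return `text` translated into `target_language`."""
    resp = requests.get(API_URL, params={
        "q": text,            # the content to translate
        "target": target_language,
        "key": api_key,
    })
    resp.raise_for_status()
    return resp.json()["data"]["translations"][0]["translatedText"]

# e.g., translate("Welcome to our community portal", "es", MY_API_KEY)
```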

Your multi-year technology budget will need to be front-loaded, trading some money for time, in order to get this online within months. However, the cost of maintaining this system will be significantly lower over the period of the contract.

Photo Credit: Scott Beale / Laughing Squid

Post-Publication Peer Review: It’s in your future

I recently attended the PLoS Forum in San Francisco. For a couple of years, I’ve been encouraging PLoS to find a way to experiment with post-publication peer review. However, the pathway from the current academic peer review system to something potentially better (faster, fairer, more precise) must first overcome the enormous weight of influence that the current publication system holds over academic careers. I was heartened that half of the day was spent trying to figure out how to move ahead with post-publication peer review.

Here is an excerpt from a Knol I wrote about scaffolding a new system based upon the reputation system of the old system:

“The real sticking points preventing scientific communication from taking full advantage of digital distribution are the following: 1) top ranked journals have cornered the reputation economy in terms of impact on tenure (they are a virtual whuffieopoly:  for the term “whuffie” see Hunt and Doctorow). 2) the very same journals remain locked into the 20th century (with resemblances to prior centuries) print-based publishing model, built on blind peer review and informed by the scarcity of space available in any printed journal. The task then, is to release them from their print-based constraints, while rewarding and supporting them to continue to be a high-end filter for quality science; and then transitioning their whuffie-abilities to a form more suited to the rapid digital dissemination of scientific outcomes. The academy needs great filters to help guide readers to the best science among hundreds of thousands of new papers every year. Universities need fair and broad feedback from the academic community to decide which faculty deserve promotion. The research community needs to accelerate publication speed and minimize editorial overhead. And the public needs markers that help them determine good science from the rest. Open-access content is the first step. The next step might need some badges.”

You can read the whole piece at Post-Publication Peer Review in the Digital Age.


IEEE eScience meeting at Oxford: 4th Paradigm

The first paradigm is experimental science. The second paradigm is theoretical science, and the third, computational science. The fourth is data-intensive science. This data-intensive paradigm is also a feature of the emerging datafullness of the object of study. Satellites and sensorwebs, CCTVs and Streetviews, MRIs and CAT scans, Facebook and YouTube: what we study is no longer data poor, but increasingly data-full. The question is no longer how to scrape up enough data to create a study, but rather how to winnow the emerging data deluge. Sociologists can no more ignore the data available from online social networks than meteorologists can ignore an emerging Mid-Atlantic tropical depression.

In his talk at the IEEE eScience meeting, Jeff Dozier also mentioned that the earth sciences are entering a new task horizon. In the 1800s and 1900s, the earth sciences were discipline-oriented sciences. From the 1980s onward, we saw the development of earth system science. Emerging now: earth knowledge in service of policy to address planetary risks, such as climate change.

The eScience challenges here are many. The increase in observational data makes it possible to refine the resolution of climate models, which pushes the limits of available HPC resources. The data processing algorithms designed for science must be made robust enough to sustain resource and environmental enforcement decisions. New venues for communication between scientists, data providers, and policy decision makers need to be supported and used. This is a real opportunity for organizations such as the ESIP Federation to become active forums for problem solving.

Photo Credit: NASA Earth Observatory

Microsoft Research’s 4th Paradigm ebook is available under a CC license here: http://bit.ly/5fs21q

#3 All Hands e-Science meeting at Oxford University

Software as a Service and Software as a science: keynote by Tony Hoare (Microsoft scientist from Cambridge).

http://research.microsoft.com/en-us/people/thoare/

The e-Science effort in the UK was meant to ensure that digital information technologies would have as great an impact on the practice of science as they were having in telecommunications, entertainment, and other aspects of society.

In the human genome project, the people who were funded did not promise to cure a single patient in the first 15 years. The notion was that the overall knowledge gain was so significant that future advances in medical knowledge would ensue. In the same way, the growth of digital tools in science will not necessarily pay off in the short term, but will build, over time, the new tools that will move science to a new level of capability.

The computer engineers who are engaged in e-science research are not just of service to “real scientists” but are also engaged in a real engineering science. And so Professor Hoare argues that the software products are not just a service to others but also the outcome of a science as “real” as chemistry or physics.

Having browsed the booths and the breakouts, I can say that the entire meeting, 600 people talking and listening for 5 days, rolls on three wheels: high-performance computing (and pooled data storage), and the means to distribute this capability to scientists in multiple locations; science tools and services built on top of this data/computing network; and collaboration practices that promote and manage a range of sharing, from data sharing, to shared experiments, to the (open access) publication of results.

The engineering of the HPC infrastructure and the building of the services on top of these are not the real transformative levers of e-science. They mostly add efficiency and distribute resources more widely, so that science does not need to happen in a few concentrated locations (research labs at selected universities and corporate locations). This distribution of effort extends regionally, and eventually, globally. But this capability and the tools that allow its use replace similar tools that scientists at selected universities already use.

The promise of new collaboration practices is where e-science has the potential to transform science in ways that are both intended and unintended. Last evening, after dining at “high table” at Christ Church, I had a spirited conversation with a fellow on the phenomenon of Wikipedia. He was astonished by the amount of trust that users place in the quality of Wikipedia. I countered that the main value of Wikipedia was its ability to cover an amazing number of topics, far more than any previous encyclopedia. The real value of Wikipedia was its range, I proposed. This value was achieved the only way possible: by reinventing the role of the author/editor. Similarly, e-science will realize its promise only when it reinvents what it means to do science: who can do it, how it’s reviewed, where it’s published, how it’s used. Very little of this promise will simply grow from improvements in HPC and tools. Much of it will emerge as new users and new collaborative opportunities arise.

Photo Credit: NASA Earth Observatory

#2 All Hands e-Science meeting at Oxford University

Tom Rodden is looking at the history of e-Science, moving from infrastructure to collaborative tools (e.g., MyExperiment). After all, the digital world is in the foreground of our lives: 1.5 billion Internet users in 2010. The more our lives are performed on digital platforms, the larger the footprint we leave. Google uses this footprint to target advertising. The next stop is a ubiquitous computing lifestyle. How, then, do we build a contextual footprint as a conscious activity? Computers will be able to sense human activities and use this sense to enable new forms of interaction.

Some gathered quotes:

“Half the world’s people have never made a phone call.” (1990s)

“Half the World will use a Mobile phone by 2010.”

“By year end 2012, physical sensors will create 20 percent of non-video internet traffic.” (Gartner Group)

Mobile phone use becomes a means of credit rating in countries with little credit history. Tom looks at the technology of amusement parks, where research is creating “fear sensors” that help park rides maintain an optimal amount of terror for each customer. Digital location services will help people find and share transportation services in real time. When DARPA released 10 red balloons, the main challenge was to create the reward system to get enough people to work together.
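
As an aside, the winning MIT team’s reward system is a nice worked example of that incentive-design problem: rewards halve at each step up the recruitment chain, so the total payout per balloon is bounded by a geometric series. The dollar figures below are the ones MIT published; the code is my own illustrative sketch.

```python
# Sketch of MIT's recursive incentive for the DARPA Network Challenge:
# the balloon finder earns $2,000, the finder's recruiter $1,000,
# that person's recruiter $500, and so on, halving at each level.

def chain_rewards(depth, finder_reward=2000.0):
    """Reward paid at each level of a recruitment chain."""
    return [finder_reward / 2**level for level in range(depth)]

rewards = chain_rewards(5)
print(rewards)       # [2000.0, 1000.0, 500.0, 250.0, 125.0]
print(sum(rewards))  # 3875.0 -- the series never reaches $4,000 per balloon
```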

Crowdsourcing: reCAPTCHA and the search for Steve Fossett are examples of crowds enlisted for a common good. These are just the beginning of public engagement in digital crowd activities. As we become ever more embedded in digital activities, we need to remember: “What matters is not technology itself, but its relationship to us.”

Mark Weiser and John Seely Brown (1996).

Rodden is wary of the imbalance of knowledge/power that arises when digital services can collect an ever-widening swath of information about our human endeavors. How do we track this information flow? How do we resist?

Photo Credit: NASA Earth Observatory

#1 All Hands e-Science meeting at Oxford University

Anne Trefethen from Oxford is opening up the All Hands e-Science meeting. 186 submissions for presentations show the growth of interest and activity in the UK in e-Science research and practice. The meeting is on the outskirts of Oxford, at the football (soccer) stadium conference center. Next door (across the parking lot) is a bowling alley and multiplex cinema. No building older than 50 years anywhere in the vicinity. So the location looks more like Oxnard than Oxford. The crowd is appropriately geeky in an academic fashion.

The opening keynote speaker (Helen Bailey) is a dancer, talking about e-Science in practice-led research. Where does e-Science lie in the larger field of technology? Is it simply science research informatics? Is it centrally HPC? Is it science 101 (hint… ASCII)? The “e” stands for “electronic,” an extension from e-mail and/or e-commerce; both of the latter refer to internet-enabled transactions. Much of the “e” in e-science involves the use of networks of computers to enable collaborations across locations. The research “transactions” flow beyond single laboratories/universities.

Helen Bailey uses e-Science to build co-located dance performances, where there are dancers from multiple locations in a single dance arena (using video feeds). This research focuses on the synchronous capabilities of an HPC network to support multiple video feeds in order to assemble a real-time event.

Helen’s website: http://www.beds.ac.uk/departments/pae/staff/helen-bailey

Photo Credit: http://www.arts-humanities.net/system/files/images/edance.jpg

Thinking about eScience…

From 2009: the Wilbanks interview is still excellent.

In a couple of weeks I will be off to Oxford, England for the All Hands eScience and IEEE eScience joint meeting. I’m looking ahead to blogging and Tweeting about what is happening there. I would guess that most of the ESIPers will be headed west to the AGU meeting in San Francisco. eScience is a big topic, and it covers a lot of ground, from informatics to the governance of virtual organizations. A lot of ESIPs are already supporting eScience through the ESIP wikis and the SOAP services they provide for data access and manipulation. So… what’s next in eScience? That’s what I’m looking for. John Wilbanks from the Science Commons had a great quote recently: “If we can lower the cost of failure and increase the interconnection and discoverability of the things we actually know, it’s one of the only non-miraculous ways to systematically increase the odds in our favor to discover drugs, understand climate change, and generally make good choices in a complex world.” http://bit.ly/70kZvm

I’ll post blogs here and also at ESIPfed.org. Need to see how searchable the ESIPFed site is.

Photo Credit: NASA1fan/msfc on Flickr, used under CC license: http://www.flickr.com/photos/28634332@N05/3952766831/sizes/s/

Delivering the Goods of Democracy for your VO

One of the first conversations I have with people who have been tasked to build or manage a virtual organization centers on the cost/benefit issues of democratic governance. Given the usual shortage of funding and time, they have real concerns about the effort required to build a community-based governance system. These concerns are usually layered on top of the more general concern that the community (or rather, certain activists within the community) may use the governance system to push the organization’s goals toward their own interests.

Certainly, democratic governance increases the overhead (in terms of time and effort) spent on governance. Top-down decision making can be quite efficient, up to the point where it tends to fail rather abruptly. Democratic governance is also more prone to being gamed by people with the time and interest to do so. This is where the community comes into play. When you build in enough democracy to give the community the opportunity to really govern, it will tend to resist the efforts of certain individuals to subvert this opportunity. This is one of the goods that democracy delivers to your VO.

Gathering privately online: the key to democracy

The door has a lock for a reason. Inside there is a group discussing their political choices and potential actions. In the US, this group has a right to assemble guaranteed by the First Amendment of the US Constitution. However, this right is also contingent on the ability of group members to meet in private. And so the right to assemble also requires that the government not record who has assembled and what was said.
On the internet, there are few ways to hide your identity (which can be matched to your use of a particular computer) when you are conducting a virtual meeting. For virtual democracy to flourish, we need to find more ways to protect our presence online. This remains a software problem, beyond the choices that people might make to reveal or conceal their identities. We need some sort of “anonymizer” service; one existing approach is sketched below.
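
For the curious, one existing approach is onion routing, as implemented by the Tor network. Here is a minimal sketch (my own illustration, assuming a local Tor client is running on its default SOCKS port, 9050, and that the requests[socks] extra is installed) of pointing an application’s traffic through Tor so the destination sees an exit node rather than your address:

```python
# Illustrative sketch: route web requests through a local Tor SOCKS proxy.
# "socks5h" ensures DNS lookups also go through Tor, not your local resolver.
import requests

TOR_PROXY = "socks5h://127.0.0.1:9050"  # Tor's default local SOCKS port

session = requests.Session()
session.proxies = {"http": TOR_PROXY, "https": TOR_PROXY}

# The destination site sees a Tor exit node, not this machine's IP address.
response = session.get("https://check.torproject.org/")
print(response.status_code)
```
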
The EFF has also noted that we are physically tracked by the very devices we use to establish our location (the iPhone’s location services are a good example). Check out this discussion on the EFF site.

Let’s find a way to lock the door on the internet!

photo credit: Doggie52 on Flickr, used under CC license