Risks and Costs of Double-Loop Governance for Your Organization

Yes, you will be giving all your members the power to steer. Image from the Valve New Employee Manual.

There are real costs and real risks in choosing a double-loop governance scheme. Single-loop, top-down management is significantly more efficient in the short run. Funders may expect a management plan that spells out a hierarchy of communication and responsibility. And if your organization does not need or want to sustain itself for more than a couple of years, then double-loop management may be the wrong decision. But if you are looking to build a virtual organization that has a good chance of being sustained for years or decades through community effort (including downstream fund development) and a small staff, then an initial investment in double-loop governance is key. You will need to sell this to your funders as an investment in sustainability.

From the perspective of the founders, the main risk in implementing double-loop governance comes from the community’s ability to alter the founders’ vision for the organization. Double-loop governance lets everybody steer. This means that the direction of travel will be set by rough consensus. It also means that the vehicle can change direction rapidly once everyone is on board with a new vision.

When decisions are owned by the community, the community will express its own vision. Bacon (2009) offers some recommendations for start-up community leadership that can lend added stability to the initial vision during the boot-strapping period. But the final word will belong to the community. If you are building an organization and cannot let go of your own vision of its future and goals, then build it with single-loop management, and trust that you have enough charisma to hold it together. Otherwise, offer the vision to your members and give them the tools to make it something they can celebrate.

In terms of cost, the main obstacle to double-loop governance is time. It will take additional months of discussion to arrive at a rough consensus on the governance system documents. (ESIP Federation members worked steadily for more than two years to arrive at the final draft of their first constitution and bylaws.) And it will take additional time for subsequent decisions to be vetted by the community before they can be implemented. Transparent decision making also means allowing time for member feedback. Fortunately, much of the business-plan implementation effort can be distributed to subgroups, which can be given enough self-governance to streamline their decisions and to accomplish work on specific action points in an agile fashion. This is how major open-source software efforts are currently organized.

Staffing a double-loop-governed organization requires finding people who have enough patience to stick to the processes the community has decided on. They must also accept the basic idea that every member is also their boss: all members have the right to comment on the ongoing workings of the organization. In return, members have an obligation to recognize the good work of the staff.

References

Bacon, J. (2009). The Art of Community: Building the New Age of Participation. Sebastopol, CA: O’Reilly Media.

Earth Cube as a Double-Loop Organization

I’ve been listening in on the opening discussions at the Earth Cube governance meeting, and I’m impressed by the level of passion and amount of expertise at the table. I’m also interested in how the conversations seem to whiplash from notions of democracy and community to ideas for data standards. People have come to the table with divergent notions of what governance means, although they are also aware that governance can mean both democracy and data standards. I would like to argue that they are looking at the same organization, but they are each describing only one of the governance “loops” that will be needed for Earth Cube.

Take a look at this keynote talk by Clay Shirky (DrupalCon 2011):

http://archive.org/details/drupalconchi_day2_keynote_clay_shirky

About 45 minutes into the video, Shirky talks about organizations that not only fix problems, but can simultaneously solve the larger issues that created those problems. In these “double-loop” organizations, the members agree to governance rules that solve common problems for their interactions (e.g., data sharing). At the same time, they agree to own these rules, that is, to govern their governance system. So when someone talks about building community, protecting expressive capabilities, voting, officers, working consensus, constitutions and bylaws, vision statements, and goals, they are approaching Earth Cube governance from the second loop. This is where the members of Earth Cube agree to be its owners. And when someone talks about data sharing policies (enforcement, compliance, standards, etc.) for Earth Cube, they are also bringing to the table issues integral to governance. These are the activities, the goals, and the outcomes of Earth Cube as a distributed/virtual organization.

First-loop governance fixes problems for the Earth Cube member community. Second-loop governance creates what sociologists call “agency.” This agency is the ability/capability to govern how the community will fix its problems. Does there need to be a committee? Who gets to be on the committee? Who is involved in a decision? Who do you talk to if you feel your voice has not been heard? Second-loop governance is responsible for answering all of these questions. In a typical NSF project, this is called “management.” The PI and co-PIs are charged with creating and implementing an effective management plan. But who should create and implement an effective second-loop governance plan? The current vision for Earth Cube puts the community into this role. Members of the community are stepping up to guide this process. But a much larger community-wide conversation will need to happen before any second-loop governance plan can be implemented.

What about first-loop governance planning? When should this happen, and how? Initial discussions about the scope of the problems to be fixed, and the solution spaces for these fixes, will help articulate the amount of (first-loop) governance activity needed to effect the fixes. The model that emerges will guide the second-loop governance planners to better solutions for their level of governance. For example, if community buy-in to a strict set of data standards is needed, then the second-loop governance effort will need to plan to build a strong community. If the main requirement is better communication, then a much weaker community will suffice. But again, these discussions will need to be revisited after the second-loop governance effort is agreed to by the members. So the governance boot-strapping process will resonate between the two loops until the initial governance plan is accepted by the Earth Cube members. At that point, second-loop governance is empowered to address new fixes, and to fix itself whenever this is needed.

A good example of this can be found in the history of the ESIP Federation. The Federation spent more than two years, with direct participation by several dozen members, before it finalized its constitution and bylaws (the main second-loop outcome). When the final vote was taken, and these documents were accepted by consensus, the various committees, emergent working groups, and thematic “clusters” were supported to begin fixing problems faced by Federation members, chief among them data interoperability and stewardship.

Building a governance model for Earth Cube will require looking into both loops: the first loop describes a number of fixes planned to address common problems (mainly around data use and sharing), while the second loop describes how the Earth Cube community can acquire ownership of the decision processes that determine which fixes are most important, and how to engage the broader community in their implementation.

Community Engagement: your agency does not have the budget to not support this

Funded research that includes a charge to engage a community (of scholars, end users, students, etc.) in doing this research, or in using its products (data, metadata, standards, etc.), is usually underfunded. There are two simple reasons for this. The first is that agencies typically underestimate the cost of doing community engagement through community governance. The second is that money is a very poor instrument for directly accomplishing engagement, mainly because of the perverse effects it has on volunteerism.

At the same time, efforts to support “community” are often pursued without actually building community-based governance. Project budgets may include large amounts of “participant support” for annual “community meetings” that could help build engagement. But without a governance structure that puts the community in a position to determine key aspects of the core activities, these meetings are little more than very expensive alternatives to an email list. The attendees may learn something, and will forge their own interpersonal connections, but the work of building community through governance is left undone.

I was once “voted” onto the policy committee of a large agency-funded effort. The first thing we were told, after they flew us in to the annual “community” meeting, was that the committee did not make policies. Similarly, the act of voting was deployed without any underlying model of membership. There are better ways to get the community engaged.

One of the reasons that community engagement has become more visible as a goal is the realization that “network effects” can greatly amplify the impacts of a research project’s outcomes. Indeed, Metcalfe’s Law tells us that the value of a network is proportional to the square of the number of members (n²). Bringing network effects to research collaboratories can accelerate the communication of ideas and knowledge sharing. A more recent study by David Reed points to an even larger effect: that of group-forming networks. Networks that allow members to self-select into smaller groups (clusters) approach an exponential increase in the scale of interactions (2ⁿ), a scale that grows much faster than any power law. Enabling the network to create internal, purposeful subgroups where members are highly engaged is one strategy that can really pay off for a funded research project.
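To make the difference in scale concrete, here is a minimal sketch in Python (the member counts and unit values are toy numbers of my own choosing, not figures from this post) comparing how network value grows under the two laws:

```python
# A toy comparison of Metcalfe's Law (value ~ n^2, pairwise connections)
# and Reed's Law (value ~ 2^n, possible self-selected subgroups).
# Absolute values are meaningless; only the growth rates matter.

def metcalfe(n: int) -> int:
    """Value proportional to the number of possible pairwise links."""
    return n * n

def reed(n: int) -> int:
    """Value proportional to the number of possible subgroups."""
    return 2 ** n

for n in (10, 20, 30, 40):
    print(f"members={n:2d}  n^2={metcalfe(n):6,}  2^n={reed(n):,}")
```

Even at a few dozen members, the group-forming term dwarfs the pairwise term; that is the arithmetic behind the advice to enable purposeful subgroups.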

The two curves in these graphs are familiar to most. The top one is the “power law” curve, which describes many distributions; notably, the expected relationship between the amount of attention/activity on an open Internet network and the number of people so engaged. This is the usual “10% of the members do 90% of the work” situation. The bottom curve is the “bell curve” normal distribution, where the mode sits at a high level of activity. What is important to notice here is that the area under the normal curve is much larger than the area under the power law curve. Much more activity is happening here, even if the number of people is the same.
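A hedged numerical sketch of the same point (the community size and the distribution parameters below are illustrative assumptions, not measurements): for the same number of members, total activity under a bell-curve participation pattern far exceeds total activity under a power-law pattern.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000  # same community size in both scenarios (assumed)

# Power-law ("10% do 90% of the work") participation: activity falls off
# steeply with rank; the top member does 100 units, rank 2 does 50, etc.
ranks = np.arange(1, n + 1)
powerlaw_activity = 100.0 / ranks

# Bell-curve participation: most members are moderately-to-highly active.
bell_activity = rng.normal(loc=50.0, scale=10.0, size=n).clip(min=0.0)

print(f"power-law total activity:  {powerlaw_activity.sum():10.0f}")
print(f"bell-curve total activity: {bell_activity.sum():10.0f}")
```

With these toy parameters, the bell-curve community produces well over fifty times the total activity of the power-law community, which is the gap the graphs are meant to convey.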

The use of community-led governance to foster engagement is covered in several other posts on this site. There are some real examples of this that can be studied; the Federation of Earth Science Information Partners (ESIPfed.org) is one. I would guess that many agency program managers can point to histories of counter-examples: projects that never created any governance capabilities, even while they spent huge amounts of budget on “community.”

The amount of real governance required to support activities also depends on what types of activities need to be supported. If communication is the goal, then a relatively weak governance system will suffice. If real collaborations (those purposeful subgroups) are to be supported, then a robust governance system may be needed. Clay Shirky examines three levels of social interaction: communication, coordination, and collaboration. Each of these levels needs its own type of governance.

Distributing activities across an engaged volunteer network of peers can accomplish everything that needs to be done on a very limited budget. Money is not the driving force here. It is rarely the case that agencies cannot afford to support governance through project research budgets; more to the point, they cannot afford not to support it if they want to accomplish what only a community can do.

Broadband as a public right of way

The public street

This essay was written in support of the Super Santa Barbara 2011 art exhibit on net neutrality.

In the forty-one years since UC Santa Barbara became the third node on ARPANET (the government funded precursor to the Internet), generations of Santa Barbarans have been born into lifescapes increasingly dominated by “online time.” The growth of the Internet and of the World Wide Web has become a case study of government support and private initiative working in concert to support a wide range of digital opportunities. In the last several years these opportunities have expanded to the point where we can say that Internet access is now integral to our private and public lives, and to commercial interests and civil society alike.  As remarkable as the past four decades have been, the time has come to make some fundamental decisions about our digital future. In Santa Barbara, one of these decisions will be the construction of new broadband capabilities that will enable full participation in the public sphere and the digital economy.

A recent (2010) study of nationwide broadband services by the US Department of Commerce found that only about 30% of these services actually meet the minimum threshold of “broadband.” Most households paying for broadband in Santa Barbara County through Cox, Comcast, Verizon, or satellite are, in fact, not getting anywhere near the minimum services that would count as broadband. And visitors from other nations (or, in my case, a son returning from college abroad) are often amused or appalled by how slow the Internet connections are in Santa Barbara. After pioneering the Internet in 1969, Santa Barbara now finds itself relegated to a digital backwater.

Because Santa Barbara County (and City) have only some tens of thousands of households, commercial providers lack the incentive to upgrade services by bringing optical fiber to the home. At the same time, these service providers (which are more accurately described as content providers) argue strongly against the notion of community-run broadband. Advocates of community-run broadband are quick to note that their service is “content neutral.” This means that for-profit content providers can use community Internet to carry their own content. Even as their current, wire-based infrastructure becomes increasingly outmoded, local Internet providers, who refuse to invest in optical fiber for Santa Barbara, continue to assert that their original investments need to be protected from future public investments.

The struggle for net neutrality hits the pavement in Santa Barbara as a struggle for community-run broadband: our only certain road to high-speed broadband access for homes and businesses. One fundamental argument for community-run broadband services comes from the notion of “property-in-common” that communities still use to assert public ownership of rights of way for highways and waterways. The “digital highway” metaphor turns out to be a rather useful notion to support community-sponsored broadband.

In the US, roadways and waterways share a common feature of being held in public and maintained as a public good. Public goods and democratic principles walk hand in hand. To participate in a democracy requires equal access to information and an egalitarian freedom of political action. Lewis Hyde, in his book Common as Air (2010), notes that when John Adams as a young adult acquired the right to vote (by inheriting property), he also acquired the obligation to help his township maintain a local bridge that was in need of repair. While the US federal government has a history of licensing rights to other forms of utility infrastructure (energy, telecommunications, railways, etc.), roadways and waterways have always been kept as a public trust.

From the start of the nation, roadways and waterways were viewed as key infrastructure for the common weal: for commerce, for travel, and for leisure. They are available for a wide range of uses by and for the public. (Yes, there are a few short stretches of private highway in the US, but these are rare, and historically have reverted to state ownership.) The City of Santa Barbara owns and maintains all city streets for public use, and leases out rights of way to private utility companies. Many of the utilities that use the street as a right-of-way are not public. Privately owned electric and telephone poles and wires festoon the streets. Natural gas lines are buried under the streets. Water and sewer lines are operated by public companies. Public and private interests are gathered into a productive partnership to serve the people of Santa Barbara.

Throughout history, public streets and parks have also been used for civic celebrations, farmers markets, political demonstrations, and state spectacles. Streets create spaces for democratic action and for the voices of difference and dissent. Their publicness warrants them for these roles. Today, much of the civil discourse has moved from the park to the blogosphere. The means of dissent and redress are also digital. From Twitter to Wikileaks, the news comes from unexpected sources on the Internet. Today, civil society meets on Facebook more than at the coffee shop.

In its early days, as a simple data link between research centers and then as a carrier of electronic mail, ARPANET may have resembled a telegraph system more than it did a city street. However, as the Internet emerged, the range of interests and activities it might carry expanded to the point where, today, it holds a place in the digital lifescapes of Santa Barbara as important and diverse as the place City streets and sidewalks hold in our physical lifestyles. Our daily lives are increasingly digital, and the lack of real, affordable broadband means that we are living on some narrow digital back alley, instead of the bright throughway we would prefer for our lives.

The people of Santa Barbara deserve a broadband service that can warrant their trust as a platform for civic engagement, education, entertainment, and commerce. Santa Barbara residents and City staff joined together to respond to the Google plan to introduce gigabit Internet to a number of sample locations. The GigabitSantaBarbara.org community helped City staff create its application. We are now waiting for Google to announce its first round of winners. But win or lose, the future of net neutrality in Santa Barbara will be brighter when the City and County remember that broadband is more than a private utility: broadband is as much a public trust as is any city street.

Photo Credit: Some rights reserved by Ben Fredericson (xjrlokix) on Flickr

Academic Publishers, Marketing Myopia, and the Next Phase of Scientific Publication

Several years ago at a DLESE (an NSF-sponsored earth science digital library organization) meeting at Cornell University, a breakout meeting was held on the topic of “relations with academic publishers” (or something similar). The organizer, from a major university press, asked the assembled group of academic participants, “What was your best experience with an academic publisher?” The idea was to reveal best-practices that would inform how the digital library could work in concert with publishers.

What happened next was instructive. The first respondent noted that he could recall no “best experience,” and then told a story of editorial neglect bordering on malfeasance by a noted journal (which will not be herein mentioned). The second respondent, perhaps primed by this story, also confessed that her research had been poorly served by the reviewers and the editors of more than one academic journal. The next five respondents listed still more occasions where their work, or that of their students, had been mishandled, delayed, derailed, and purloined during publication. The organizer struggled to redirect the conversation, but by the end of the hour virtually every person in the room had confessed their frustration, their anger (tinged with rage at moments), and their doubts about the entire enterprise of academic publication.

Publishers might respond that academics always feel that their work is under-valued, and that they resent the changes reviewers suggest and editors demand. This response, while accurate, avoids the larger issues now at play in the “ecology” of academic publication. Publishers might want to frame the problem as simply one of the difficulty of vetting and polishing high-quality work. However, for the people (the content providers) in the breakout meeting, the problem was that the process of academic publication was failing to support the goals of science in general, and the needs of their research (and the progress of their careers) specifically. At the end of the break-out meeting, the mood of the room was something like a kinship of people who had been taken hostage by a dysfunctional service they could neither influence nor avoid.

With new opportunities currently unfolding for rapid digital dissemination of academic content, the tensions between academic publishers and the people who are, simultaneously, their content providers and main customers may find resolution in a combination of technology and culture. Academics are positioned to move ahead into emerging digital distribution channels. Commercial and non-profit publishers that cling to the logic of their historic, paper distribution channels (the constraint of scarce space in the printed form, peer review by a few anonymous individuals, access by subscription, deep-pocketed research libraries) will fall into the trap of marketing myopia (Theodore Levitt’s term [Harvard Business Review, 1960]). Levitt asked the question, “What business are you in?” Each university press, academic organization press, and for-profit publisher needs to find a new answer to this question.

This new answer will need to embrace some new/old ideas about how the academy works in the digital era, and where value-added services might support a new business model. Clearly, the current model has aged beyond a simple death, and into some zombie state. Curiously, the original (350-year-old) model for the Philosophical Transactions of the Royal Society may closely resemble what the next phase of academic publishing will look like. Henry Oldenburg, the first Secretary of the Royal Society, took on the task of gathering and publishing the letters sent to the Society. Publication was designed to be as rapid as possible, and the value of each letter was to be determined by the readers.

The living-dead publisher perspective (propelled by marketing myopia) is evident in the response that various publishers’ associations made to a study sponsored by JISC on the economics of various alternative forms of scientific publication. One of these forms, modeled on the arXiv project (http://arxiv.org/), would reduce the overall cost of publication by about eighty percent (including peer review and copy-editing), while making new findings available almost immediately. Here is a link to the report, the publishers’ response, and JISC’s response to the response (pdf): http://bit.ly/fGJQI0

Conversations are occurring at government agencies that want to increase the impacts of the research they fund, at universities that want to extend access to the knowledge they produce, and at conclaves where the digital future of academic publishing is being informed by Web 2.0 capabilities and a host of supporting technologies. These conversations will lead to new directions in federal and foundation funding, new digital objects that link science back to its data, new, community-based standards for sharing data and research, and new academic communities (virtual organizations) where socially networked colleagues will not simply review research archives (and data), but also add value to these objects, build their reputations, support academic career paths, encourage innovation, and attract new students into the academy.

To avoid becoming the “buggy-whip manufacturers” of the 21st Century, academic publishers need to be a part of these conversations; and not simply to defend their zombie business models. Cost savings are opportunities for profit growth. But the future will no longer be hostage to the print model, and the bargain between the academy and publishers will no longer capture research behind subscription pay walls.

Photo Credit: Stathis Stavrianos on Flickr (cc licensed) http://www.flickr.com/photos/stathis1980/4133296950/

Democracy First: effective governance to grow an active social network

Any social network service is much more than its code, its content, and its communities. It is all of these within a dynamic framework of rights and roles, and of needs and opportunities. All of these opportunities will flourish best if they are built on a thoughtful system of best practices and clear rules.

Your social network platform represents the various groups that must come together to build, maintain, support, and use the service. These groups include teams of developers/administrators, some number of key sponsors/funders, member organizations, and member individuals. Each of these groups has interests that you want to fulfill.

These interests can be identified by the issues they engender: funding, community (leadership, reputation), privacy (policy creation), sharing (licensing), branding, technology (features and standards), and policing (boundary control for content and bad behavior). All of these issues (funding excepted) can be addressed over time through the right kind of governance system. This system is the garden where communities can grow.

Too often, software services (even currently successful ones such as Facebook and Wikipedia) have paid too little attention to governance in their infancy. This failure has far-reaching consequences, some of which are now becoming evident in these early Web 2.0 experiments. Best practices suggest that governance needs to be considered up-front, at the same time as software design.

One of these best practices is to get your members involved in devising (and then owning) the governance system. So the plan is to first create a starting point, defining membership within the network, and then to facilitate the members’ creation of the system.

The goal is to build a nimble system that rewards sponsors for their support, enables open-source software development, encourages organizations to add their members, and gives each member not simply a voice, but a say in how their network runs.

Photo Credit: CC license on Flickr by undersound

#3 All Hands e-Science meeting at Oxford University

Software as a Service and Software as a Science: keynote by Tony Hoare (Microsoft Research scientist from Cambridge).

http://research.microsoft.com/en-us/people/thoare/

The e-Science effort in the UK was to ensure that digital information technologies would have as great an impact on the practice of science as they were having in telecommunications, entertainment, and other aspects of society.

In the human genome project, the people who were funded did not promise to cure a single patient in the first 15 years. The notion was that the overall knowledge gain was so significant that future advances in medical knowledge would ensue. In the same way, the growth of digital tools in science will not necessarily pay off in the short term, but will build, over time, the new tools that will move science to a new level of capability.

The computer engineers engaged in e-science research are not just of service to “real scientists”; they are also engaged in a real engineering science. And so Professor Hoare argues that the software products are not just a service to others but also the outcome of a science as “real” as chemistry or physics.

Having browsed the booths and the breakouts, I can say that the entire meeting, 600 people talking and listening for 5 days, rolls on three wheels: high-performance computing (and pooled data storage), along with the means to distribute this capability to scientists in multiple locations; science tools and services built on top of this data/computing network; and collaboration practices that promote and manage a range of sharing, from data sharing, to shared experiments, to the (open access) publication of results.

The engineering of the HPC infrastructure and the building of the services on top of these are not the real transformative levers of e-science. They mostly add efficiency and distribute resources more widely, so that science does not need to happen in a few concentrated locations (research labs at selected universities and corporate locations). This distribution of effort extends regionally, and eventually, globally. But this capability and the tools that allow its use replace similar tools that scientists at selected universities already use.

The promise of new collaboration practices is where e-science has the potential to transform science in ways that are both intended and unintended. Last evening, after dining at high table at Christ Church, I had a spirited conversation with a fellow on the phenomenon of Wikipedia. He was astonished by the amount of trust that users place in the quality of Wikipedia. I countered that the main value of Wikipedia was its ability to cover an amazing number of topics, far more than any previous encyclopedia. The real value of Wikipedia was its range, I proposed. This value was achieved the only way possible: by reinventing the role of the author/editor. Similarly, e-science will gain its promise only when it reinvents what it means to do science; who can do it; how it is reviewed; where it is published; how it is used. Very little of this promise will simply grow from improvements in HPC and tools. Much of it will emerge as new users and new collaborative opportunities arise.

Photo Credit: NASA Earth Observatory

#2 All Hands e-Science meeting at Oxford University

Tom Rodden is looking at the history of e-Science, moving from infrastructure to collaborative tools (e.g., MyExperiment). After all, the digital world is in the foreground of our lives: 1.5 billion Internet users in 2010. The more our lives are performed on digital platforms, the larger the footprint we leave. Google uses this footprint to target advertising. The next stop is a ubiquitous-computing lifestyle. How, then, do we build a contextual footprint as a conscious activity? Computers will be able to sense human activities and use this sense to enable new forms of interaction.

Some gathered quotes:

“Half the world’s people have never made a phone call.” (1990s)

“Half the world will use a mobile phone by 2010.”

“By year end 2012, physical sensors will create 20 percent of non-video Internet traffic.” (Gartner Group)

Mobile phone use becomes a means of credit rating in countries with little credit history. Tom looks at the technology of amusement parks, where research is creating “fear sensors” that help park rides maintain an optimal amount of terror for each customer. Digital location services will help people find and share transportation services in real time. When DARPA released 10 red balloons, the main challenge was to create the reward system to get enough people to work together.

Crowd sourcing: ReCaptcha and the search for Steve Fossett are examples of crowds enlisted for a common good. These are just the beginning of public engagement in digital crowd activities. As we become ever more embedded in digital activities, we need to remember: “What matters is not technology itself, but its relationship to us”

Mark Weiser and John Seely Brown (1996).

Rodden is wary of the imbalance of knowledge/power when digital services can collect an ever-widening swath of information about our human endeavors. How do we track this information flow? How do we resist?

Photo Credit: NASA Earth Observatory

This is cyberSocialstructure: discussions about Virtual Democracy

Anti-Iraq war demonstration

Cybersocialstructure.org will be moving to a Drupal-based website later. In the meantime, the discussion about how much democracy is needed for your Virtual Organization (VO) can continue.

CyberSOCIALstructure is destined to be a space where many people add their voices to discussions about the role that social practices (and theories) play in creating and sustaining cyberinfrastructure and CI organizations.
CONTACT: Bruce Caron   bruceATnmri.org
New Media Research Institute, Santa Barbara, CA

CyberSOCIALstructure (CS) looks at the social issues implicit in cyberINFRAstructure (CI). This discussion reverses the usual conversation about the impacts of the Internet on global politics and eGovernment. Instead, CS looks at community and governance as necessary social aspects of building and sustaining VOs.
PHOTO: http://www.stopwar.org.uk/photos/iraq27demo03_05.jpg