Collective Intelligence

Title: Data-Driven Governance
Version: v.01
Date: 5/18/2021

Existing Research and Innovations

At Govrn, we view outcome-based donations (OBDs) as an idea that combines existing research and innovations in unique ways. We’re bridging related ideas to try something new. Read more about Govrn and OBDs here.

It’s worth mentioning that our work exists in the broader context of a governance renaissance. So many people and organizations are rethinking the roles of institutions, governance, and civic engagement to create more equitable, transparent, and participatory systems better able to address complex problems.

There’s no way we can cover everything going on in the space. But we do want to highlight some amazing work by others, both to set the context for the broader idea of governance innovation and to explain where some of the ideas for Govrn come from.

First up: collective intelligence. Next week we’re diving into data-driven governance, where we’ll introduce a few key terms and explore how Govrn contributes to the ecosystem.

What is Collective Intelligence?

Many of us have heard of the wisdom of the crowd, and a prime example is the classic jelly bean counting exercise.

The story goes like this: there is an unknown number of jelly beans in a glass jar, and you’re tasked with making the best possible guess. Would you rather have an above-average jelly bean estimator make a guess, or take the average of many people’s guesses? It turns out that the way to consistently get the most accurate estimate is to take the average of many people’s guesses, not to rely on any one individual, even an expert. Occasionally one individual’s guess beats the crowd’s, but on a consistent basis the crowd is “smarter” than any individual. This is collective intelligence.
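To make that intuition concrete, here’s a minimal simulation sketch. The jar size, crowd size, and error levels are assumptions chosen purely for illustration, not data from any real exercise; it simply compares one above-average estimator against the average of many noisier guesses.

```python
import random

# Hypothetical illustration of the wisdom of the crowd.
# All numbers below are assumed for demonstration only.
random.seed(42)

TRUE_COUNT = 1000   # actual number of jelly beans in the jar
CROWD_SIZE = 200    # number of people guessing
TRIALS = 1_000      # repeat the experiment many times

expert_wins = 0
for _ in range(TRIALS):
    # A single above-average estimator: unbiased but still noisy (about ±15%).
    expert_guess = TRUE_COUNT * random.gauss(1.0, 0.15)

    # A crowd of less-skilled guessers: individually much noisier (about ±40%),
    # but their individual errors tend to cancel out in the average.
    crowd_guesses = [TRUE_COUNT * random.gauss(1.0, 0.40) for _ in range(CROWD_SIZE)]
    crowd_average = sum(crowd_guesses) / len(crowd_guesses)

    if abs(expert_guess - TRUE_COUNT) < abs(crowd_average - TRUE_COUNT):
        expert_wins += 1

print(f"Expert beats the crowd average in {expert_wins / TRIALS:.0%} of trials")
# Under these assumed noise levels, the crowd average wins the large majority
# of the time, even though every individual in the crowd is a worse estimator.
```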

But collective intelligence isn’t just a story; it’s a well-researched and widely practiced approach in everything from public problem solving to open source movements. Next we’ll look at some practical examples of collective intelligence in action and how the lessons relate to Govrn.

Collective Intelligence in Action

There are three lenses that we’ve found helpful for analyzing examples of collective intelligence. The first is the acknowledgement that collective intelligence exists across sectors, with compelling case studies in (a) open source movements, (b) the private, nonprofit, and civic sectors, (c) academia, and (d) government.

Open source movements like Wikipedia and Python, some of the most widely used tools in the world, leverage collective intelligence for massive public utility. In the private, nonprofit, and civic sectors, social impact marketplaces like InnoCentive and MIT Solve have achieved incredible success[1] facilitating teams of globally sourced “solvers” who don’t necessarily have expert credentials. In academia, there’s a growing call to “break science out of the Ivory Tower,” with fascinating case studies in citizen science, community-engaged research, and co-creation of knowledge with impacted communities and relevant nonprofits and government agencies. Collective intelligence exists in government, too. Examples range from participatory budgeting and planning, to calls for public help in solving challenges, to engaging the public in lawmaking itself.

The second is a distinction around the stage of problem solving that collective intelligence addresses. Projects like Wikipedia and the Federation of American Scientists’ Ask a Scientist Project focus on knowledge and data accessibility, which is an enormous task in and of itself. Yet once knowledge and data are available, other collective intelligence projects can address complex societal problems beyond knowledge dissemination, such as the adverse effect of poverty on educational outcomes.

The third lens we’ve been thinking about is the degree of moderation and institutional centralization involved in a collective intelligence project. Is the collective intelligence pathway simply a public feedback mechanism, or are contributors directly creating content? Are contributions funneled through a moderator or centralized institution for evaluation, or can contributors add material to the project directly? For example, anyone with an IP address can contribute to Wikipedia directly, so long as they are not flagged for vandalism by volunteer moderators. InnoCentive and MIT Solve have evaluation processes for proposed solutions (client-centered and expert-judged, respectively). And in participatory planning and community-engaged lawmaking, the decision-making power ultimately still lies with government officials (at least for the moment).

How Collective Intelligence Relates to Govrn

If there’s anything we’ve learned from these case studies, it’s that there is openness to the idea of collective intelligence across sectors, but that infrastructure is needed for collective intelligence to thrive. This infrastructure includes data/knowledge, coordination of people around a defined problem, vetting of solutions, and funding. Outcome-based donations (OBDs) can provide several pieces of this infrastructure at once: sharing of (or expression of public demand for) data/knowledge, coordination between stakeholders around a problem, funding for the problem, and vetting of solutions by stakeholders.

Govrn’s OBD mechanism inherently leverages collective intelligence in several ways, because it makes it easier for constituents, experts, and politicians to propose ideas in the open and receive feedback from other stakeholders. This opens the door for transparent collective decision-making and community goal setting. OBDs allow constituents to bring their lived experience to the table, identifying priorities and evaluating proposals for their own communities, which broadens the idea of subject matter expertise. Researchers and practitioners who may not be the most well-known, credentialed, or well-connected can make proposals and contribute to or critique others’ proposals as well. And lastly, Govrn makes it easier for engaged constituents to become politicians.

Now, on the note of moderation and expert credentialing, it’s worth revisiting the Wikipedia example for a moment. Interestingly, Wikipedia took off only after an earlier attempt called Nupedia failed. Nupedia required articles to be written by credentialed experts and vetted by an editor-in-chief, a process so time-consuming that the wisdom-of-the-crowd phenomenon never had a chance to kick in. This (along with the idea that expertise is subjective and not well brokered by a private company) is why Govrn doesn’t want to be responsible for credentialing experts.

That said, there are several valid arguments for the option of vetting solution proposals in the Govrn context. This is where the flexibility of an OBD comes in, with checks and balances from multiple angles. Politicians who opt in to Govrn OBDs are constrained by legality, precedent, and bureaucracy. And anyone who submits a proposal opens themselves up to rigorous questioning by other subject matter experts, politicians, practitioners, and those with lived experience.

But even without checks and balances from politicians and other experts, constituents still have a lot of vetting power. Constituents who participate in an OBD have voting rights and can choose whether to accept or reject proposals (a client-focused model, similar to InnoCentive). Or an OBD community could vote to delegate the decision to a selected network of experts it has identified (similar to MIT Solve or Challenge.gov).
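For illustration only, here’s a rough sketch of those two vetting paths in code. Govrn hasn’t published a technical spec for this, so every class, field, and threshold below is a hypothetical assumption rather than a description of the actual mechanism.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the two vetting paths described above; all names and
# thresholds are illustrative assumptions, not Govrn's implementation.

@dataclass
class Proposal:
    title: str
    votes_for: int = 0
    votes_against: int = 0

@dataclass
class OBDCommunity:
    constituents: set                                   # participants with voting rights
    expert_panel: set = field(default_factory=set)      # optional delegated reviewers

    def vote(self, voter: str, proposal: Proposal, accept: bool) -> None:
        """Client-focused vetting (InnoCentive-like): constituents vote directly."""
        if voter not in self.constituents:
            raise ValueError("only OBD participants hold voting rights")
        if accept:
            proposal.votes_for += 1
        else:
            proposal.votes_against += 1

    def accepted(self, proposal: Proposal) -> bool:
        """Simple-majority acceptance rule (an assumed threshold)."""
        return proposal.votes_for > proposal.votes_against

    def delegate_decision(self, expert_votes: dict) -> bool:
        """Delegated vetting (MIT Solve / Challenge.gov-like): a community-chosen
        expert panel decides on the community's behalf."""
        approvals = sum(1 for expert, ok in expert_votes.items()
                        if expert in self.expert_panel and ok)
        return approvals > len(self.expert_panel) / 2
```

In this sketch, `vote` mirrors the client-focused path and `delegate_decision` mirrors the expert-panel path; an OBD community could, in principle, choose either.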

Additional Resources

Know of any additional collective intelligence resources or case studies? Drop them here!

Challenge.gov
NYU GovLab - Collective Intelligence Case Studies
How Collective Intelligence can Change Our World
Nesta's Collective Intelligence Design Playbook
Georgetown's CSET Forecasting Project
Collective Intelligence & Modding
ACM Collective Intelligence Series

Footnotes

[1]: Solutions on InnoCentive have a 78% success rate and are identified 4x faster and cost 10x less than traditional methods. It’s no wonder that organizations like NASA and AstraZeneca use the platform.