I dig it. Here are some discussion topics that came to mind. I’ll offer my thoughts after I spend some time pondering.
How do we determine how many projects to allow into the process, and how do we determine the max number of projects to approve to start work? As long as there is a team, and the DAO agrees it’s important, that’s fine? Or do we cap at a specific number of projects per season?
The reputation work that @0xJustice, @AboveAverageJoe, and @saulthorin are working on could play an important role in the maturity of this process. One way I see that happening is that people who have credentials/rep around the specific subject of the project could have a higher-weighted vote than someone far removed. Similarly, those who have higher rep overall, regardless of their areas of interest (all L2s, for example), would have a higher-weighted vote. That way we make sure the vote result is truly representative of the part of the community that is qualified, invested, active, etc.
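To make the idea concrete, here is a minimal sketch of a reputation-weighted tally. The weighting scheme (a base weight of 1 so new members still count, subject-matter rep counting double, general rep counting once) is purely an illustrative assumption, not an existing BDAO or reputation-tool mechanism:

```python
# Hypothetical reputation-weighted vote tally. The specific weights
# are illustrative assumptions only.

def weighted_tally(votes, subject_rep, general_rep):
    """votes: {voter: 'yes'|'no'}; rep dicts: {voter: score >= 0}."""
    tally = {"yes": 0.0, "no": 0.0}
    for voter, choice in votes.items():
        # Base weight of 1 so members with no rep still count;
        # subject-matter rep is weighted more heavily than general rep.
        weight = 1.0 + 2.0 * subject_rep.get(voter, 0) + general_rep.get(voter, 0)
        tally[choice] += weight
    return tally

votes = {"alice": "yes", "bob": "no", "carol": "yes"}
subject_rep = {"alice": 3}             # alice has rep in this project's domain
general_rep = {"bob": 2, "carol": 1}   # overall DAO reputation
print(weighted_tally(votes, subject_rep, general_rep))  # {'yes': 9.0, 'no': 3.0}
```

The point of the base weight is exactly the inclusiveness concern raised later in this thread: newer members are never weighted to zero.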
I’m sure I’ll think of more comments and questions. I’ll add them as they come.
My direction here is to move on the previously identified problem of the inability of the DAO to say “no” to projects.
This concept, in part, addresses that without emotion or subjectivity.
The available projects can be ranked; the number of projects is, I think, immaterial. The ranking would be proposed for adoption and then followed until the next re-ranking.
Reputation is a valuable tool for voting and for influencing the outcomes of votes. If this were a proposal, reputation could play a factor in determining the outcome.
As we approach the introduction of reputation, we might be cautious about how to balance inclusiveness of new ideas against the application of reputation. In this case, “open for all” submissions invite new projects and allow newer members to jump right in, if their project is ranked high enough.
It would be great if we could ideate a bit on a system to determine funding. One example is more like an incubator: first funding is for a limited amount in exchange for a stake in the project. If it hits the milestone or other criteria, it can elevate into a higher funding envelope, etc. This pushes the conversation somewhat toward the difference between projects and bounties, which is another topic that has come up for resolution.
A different example of the funding process may include an intermediate step, something like a “deal score”, with different funding levels for different scores: step 1 is the prioritization of the projects, step 2 is the deal score (A, B, C, F), and step 3 might be independently setting the total funding envelope for projects. Along the journey, projects will fall off through natural selection - a form of “no” at each step.
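A rough sketch of that three-step pipeline - priority ranking, then deal score, then a shared funding envelope. The grade-to-amount mapping and dollar figures are made-up placeholders, not a proposed schedule:

```python
# Sketch of staged funding: rank -> deal score -> funding envelope.
# Grade letters and amounts are illustrative assumptions only.

FUNDING_BY_GRADE = {"A": 10_000, "B": 5_000, "C": 2_000}  # "F" gets nothing

def fund_projects(ranked_projects, deal_scores, envelope):
    """ranked_projects: names in priority order (step 1).
    deal_scores: {name: 'A'|'B'|'C'|'F'} (step 2).
    envelope: total budget set independently (step 3).
    Projects fall off (a form of "no") at each step."""
    funded = {}
    remaining = envelope
    for name in ranked_projects:
        grade = deal_scores.get(name, "F")
        amount = FUNDING_BY_GRADE.get(grade, 0)
        if amount == 0 or amount > remaining:
            continue  # natural selection: F grade, or envelope exhausted
        funded[name] = amount
        remaining -= amount
    return funded

print(fund_projects(["p1", "p2", "p3"],
                    {"p1": "B", "p2": "F", "p3": "A"},
                    16_000))  # {'p1': 5000, 'p3': 10000}
```

Note how higher-priority projects draw from the envelope first, so a fixed envelope naturally enforces the “no” at the tail of the ranking.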
The ‘number’ of projects being proposed at that point in time matters too…
If there are too many to be decided upon, discoverability can be an issue, and any project’s fate would depend entirely on the marketability of the proposers rather than the strength of the proposal itself. That has a danger of skewing the scores!
This makes a lot of sense.
You mean to say that even the old/existing projects have to compete with the new ones? I hope not, lest it complicate, rather than simplify, the situation.
When running projects are to be re-rated, I presume they should be kept in a different basket rather than made to stand alongside the new ones.
I guess this is a fair & straightforward approach, until or unless there’s a better one.
How do you define priority as different from ‘score’? Request to elaborate on this, please.
Scaling the number of projects can be done by assigning a fixed number of grants to review (e.g. each application/team must review 3 others using a rubric). Just make the list of who’s doing what transparent. Selection could be either first come, first served (which potentially allows people to match their expertise) or randomly assigned. Note: random assignment can be restricted by applicants categorizing their projects when they submit, so they’re more likely to be matched with reviewers who have relevant expertise.
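The assignment scheme above can be sketched roughly like this. Everything here (the category-preference rule, the fallback when a category is too small) is an assumption for illustration, not a spec:

```python
# Sketch of category-restricted random review assignment: each applying
# team reviews a fixed number of other applications, preferring ones in
# its own declared category. Illustrative assumptions throughout.
import random

def assign_reviews(applications, reviews_per_team=3, seed=0):
    """applications: {team: category}. Returns {team: [teams it reviews]}."""
    rng = random.Random(seed)  # seeded so the transparent list is reproducible
    assignments = {}
    for team, category in applications.items():
        others = [t for t in applications if t != team]
        # Prefer applications in the same category, for relevant expertise;
        # fall back to the full pool if the category is too small.
        same_cat = [t for t in others if applications[t] == category]
        pool = same_cat if len(same_cat) >= reviews_per_team else others
        assignments[team] = rng.sample(pool, min(reviews_per_team, len(pool)))
    return assignments

apps = {"t1": "media", "t2": "media", "t3": "dev", "t4": "dev", "t5": "media"}
print(assign_reviews(apps))  # the transparent "who's reviewing what" list
```

Publishing the seed alongside the assignment list would keep the random draw auditable.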
1a. Tbh, this sounds very similar to a scientific peer-review process. Too much relevant tech to survey here, but the parallels are worth considering IMO.
1b. Projects should be categorized based on how many times they’ve received BDAO funding previously. Example: Ocean DAO uses the following tiers, which seem to work well: first-time applications have a small cap, second- and third-time apps go up, then around the fifth time a project applies for & receives a grant we enable a max ceiling. A maximum number of lifetime grant awards for each project may also be worth considering.
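That tiering could look roughly like the function below. The dollar caps and the lifetime limit are placeholder assumptions (Ocean DAO’s actual numbers differ):

```python
# Sketch of funding caps that grow with the number of prior grants,
# loosely modeled on the Ocean DAO tiers described above.
# All cap values and the lifetime limit are assumptions.

def funding_cap(prior_grants, max_lifetime_grants=10):
    """Return the max a project may request, given prior funded grants."""
    if prior_grants >= max_lifetime_grants:
        return 0            # optional lifetime ceiling on awards
    if prior_grants == 0:
        return 3_000        # first-time applicants: small cap
    if prior_grants <= 2:
        return 10_000       # second & third applications go up
    if prior_grants == 3:
        return 20_000
    return 35_000           # ~fifth application onward: max ceiling

print([funding_cap(n) for n in (0, 1, 3, 4, 10)])
# [3000, 10000, 20000, 35000, 0]
```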
2. Reputation weighted voting. I’m very hesitant for a few reasons. First, all those tools are super new and we still haven’t even fully settled debates about best practices with token weighted voting. Any implementation would have to go low and slow with thorough monitoring & impact analysis. See this recent insightful post from SourceCred core contributor Seth Benton about the myriad potential ways a reputation/contribution incentive & voting system can interface with community culture for better or worse.
2a. Token-weighted voting to determine funding allocations. Because BDAO is a media DAO, the culture and brand identity certainly have huge roles in the value landscape created here. However, given BANK daily volume on Polygon alone is >$1K/day, we are essentially overtaking the inherent value of culture & brand. I haven’t seen the numbers, but I’m curious what proportion of BDAO’s revenue comes from products vs services.
Products are infinitely scalable and can be set on autopilot with far lower overhead than a services model offering identical content (e.g. teaching a course in person vs maintaining a digital productized version). If most of our revenue comes from services, then our governance token’s value would primarily be based on continuing to ship amazing deliverables. In combination with an ask for part of funded projects, it feels like we’re almost becoming an investment DAO. Closing the loop between token holders and the ability to direct where their investment flows would seem appropriate (and could even be combined with reputation/contribution-weighted voting).
3. Missing an opportunity for broader community input. Projects could still also be open to voluntary community feedback as a non-binding check on the reviewers. If the community and reviewers disagree, the project would be eligible to appeal the original reviews and request a new panel (composed of people who didn’t vote in the poll).
Thank you for this great input contributing to furthering this conversation.
These are all valuable ideas.
There is a math paper out there somewhere showing that when peers reviewed the selection for conference presentations, the outcome was mathematically the same as when a traditional committee of academic “experts” selected and scored the entries instead. The organizational savings lie in removing the need for gatekeeping “experts” (i.e. “reputation”) to choose for the community, less community friction when a project paper is not selected for presentation, and fewer resources consumed for the same outcome.
Also, I am a big advocate for trickle-funding proposals as opposed to funding the whole project up front.
Lastly, I still see a possibility for project priority to be considered separately. There has been no real alternative to the ability to “say no” to project funding, which is the mischief at the heart of the challenge we are trying to overcome.
bDAO could adopt a fixed incremental funding envelope and cycle, separate from the prioritization of projects… what do you all think? …just moving the convo ahead