Ethereum co-founder Vitalik Buterin says artificial intelligence could help create more efficient decentralized governance models and enable users to make better-informed decisions.
Buterin said in an X post on Sunday that one of the main issues with democratic and decentralized modes of governance, like DAOs, is the “limits to human attention”: the many decisions involved can demand a range of expertise, or an amount of time, that most participants don’t have.
“The usual solution, delegation, is disempowering. It leads to a small group of delegates controlling decision-making while their supporters, after they hit the delegate button, have no influence at all,” he said.
The average DAO participation rate is thought to be between 15% and 25%, which can lead to problems such as concentration of power and poor decision-making. In the worst cases, a rogue actor may accumulate enough tokens to pass a harmful proposal before other members realize it, resulting in a governance attack.
Buterin proposes that personal assistant large language models (LLMs) could help solve the “attention problem” by providing users with the relevant information needed for a vote.
“If a governance mechanism depends on you to make a large number of decisions, a personal agent can perform all the necessary votes for you, based on preferences that it infers from your personal writing, conversation history, direct statements,” he said.
“If the agent is unsure how you would vote on an issue and convinced the issue is important, then it should ask you directly, and give you all relevant context,” Buterin added.
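The agent Buterin describes can be thought of as a simple decision loop: infer a vote from the user’s stated preferences, and escalate to the user with context when confidence is low. The sketch below is purely illustrative; the class, the keyword-scoring stand-in for an LLM, and the threshold are all hypothetical, since a real system would query a model over the user’s writing and conversation history.

```python
# Hypothetical sketch of a "personal voting agent". A real system would
# consult an LLM over the user's writing and history; here a simple
# keyword-weight score stands in for that inference.
from dataclasses import dataclass


@dataclass
class Proposal:
    title: str
    summary: str


class PersonalVotingAgent:
    def __init__(self, preferences, confidence_threshold=0.5):
        # preferences: keyword -> weight, inferred (per Buterin) from the
        # user's personal writing, conversations, and direct statements
        self.preferences = preferences
        self.confidence_threshold = confidence_threshold

    def decide(self, proposal):
        """Return ('for' | 'against' | 'ask_user', relevant context)."""
        text = f"{proposal.title} {proposal.summary}".lower()
        score = sum(w for kw, w in self.preferences.items() if kw in text)
        if abs(score) < self.confidence_threshold:
            # Unsure how the user would vote on an important issue:
            # ask them directly and hand over all relevant context.
            return ("ask_user", proposal.summary)
        return ("for" if score > 0 else "against", proposal.summary)


agent = PersonalVotingAgent({"privacy": 1.0, "fee increase": -1.0})
vote, context = agent.decide(Proposal("EIP-X", "Strengthen privacy guarantees"))
```

With these toy preferences the agent votes “for” the privacy proposal on its own, but a proposal matching no known preference would be routed back to the user.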
Lane Rettig, a researcher at the Near Foundation specializing in AI and governance, said last year that the non-profit was working on a similar idea: AI-powered digital twins that vote on behalf of DAO members to address low voter participation.
Another challenge in highly decentralized governance arises when key decisions depend on private or sensitive information, such as during negotiations, internal disputes, or funding choices, according to Buterin.
“Typically, organizations solve this by appointing individuals who have great power to take on those tasks,” he said.
He added that an alternative solution could be users submitting their “personal LLM into a black box, the LLM sees private info, it makes a judgment based on that, and it outputs only that judgment. You don’t see the private info, and no one else sees the contents of your personal LLM.”
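The interface Buterin describes can be sketched as follows. Everything here is hypothetical: a Python class cannot actually enforce the confidentiality boundary, which in practice would require something like a trusted execution environment or secure multi-party computation; the sketch only illustrates that private inputs go in and nothing but the judgment comes out.

```python
# Hypothetical interface sketch of the "LLM in a black box" idea: the box
# sees private info and the user's personal model, and emits only a
# judgment. A real deployment would enforce this boundary with hardware
# or cryptography, not an ordinary class.

class BlackBoxJudge:
    def __init__(self, personal_llm):
        # personal_llm: callable(private_info) -> judgment. Neither the
        # model's contents nor the private input are exposed to callers.
        self._judge = personal_llm

    def evaluate(self, private_info: str) -> str:
        # Only the final judgment leaves the box.
        judgment = self._judge(private_info)
        assert judgment in ("approve", "reject")
        return judgment


# Hypothetical stand-in for a user's personal model
def my_model(info: str) -> str:
    return "approve" if "within budget" in info else "reject"


box = BlackBoxJudge(my_model)
result = box.evaluate("Confidential: proposal is within budget and low risk")
```

The point of the design is that other participants see `result` but never the confidential input or the user’s personal model.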
“All of these approaches involve each participant making use of much more information about themselves and potentially submitting much larger-sized inputs. Hence, it becomes even more important to protect privacy,” Buterin said.