EmTechMIT, Cambridge, MA: At the MIT Media Lab, few voices blend data science, social theory, and design for human agency as fluently as Professor Alex “Sandy” Pentland. His talk at EmTechMIT, titled “The Power of Choice,” was not about the power of technology itself, but about how to realign it around the individual. Pentland offered a refreshing counterpoint to dystopian AI narratives, focusing instead on how digital “loyal agents” can restore trust, accountability, and personal sovereignty in the information economy.
“We need to empower people,” Pentland began. “Not by giving them answers or telling them what to do, but by acting more like reflection agents.” That phrase, reflection agents, became a defining thread through the talk. In Pentland’s vision, technology should hold up a mirror, not a leash.
From Mediators to Loyal Agents
Pentland and his MIT team have been running group discussion experiments that blend artificial intelligence with principles of social psychology. Their “AI mediator” listens in on discussions, synthesizes what different participants are saying, and periodically reflects those themes back to the group. “The AI says things like: ‘I hear some people saying this, and others saying that,’” he explained. “It keeps people focused on what’s actually happening in the conversation.”
The results are striking. Groups guided by this reflective agent consistently generate outcomes that participants “like much better” compared with control groups. The AI doesn’t dictate conclusions; it fosters coherence. By summarizing and visualizing patterns of thought, it helps groups recognize shared values that might otherwise get lost in the noise.
This approach, Pentland observed, also democratizes conversation. When the AI participates, “even people who have very different opinions feel included.” It shifts authority from loud voices to collective intelligence. In a culture awash in polarizing and performative discourse, such an intervention feels almost radical.
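The reflection behavior described above can be caricatured in a few lines of code. This is a deliberately minimal sketch, not the MIT team’s actual system: it assumes some upstream process has already labeled each statement with a theme, and it merely mirrors the tally back in the “I hear some people saying this, and others saying that” style Pentland describes. All function and variable names here are hypothetical.

```python
from collections import defaultdict

def reflect(statements):
    """Mirror a discussion back to the group without taking sides.
    `statements` is a list of (theme_label, text) pairs produced by
    some upstream summarizer; this function only reflects them."""
    themes = defaultdict(list)
    for theme, text in statements:
        themes[theme].append(text)
    # One clause per theme, weighted by how many people raised it.
    clauses = [f"{len(texts)} of you emphasizing {theme}"
               for theme, texts in themes.items()]
    return "I hear " + ", and ".join(clauses) + "."
```

For example, `reflect([("cost", "too expensive"), ("cost", "budget matters"), ("safety", "we need guardrails")])` returns `"I hear 2 of you emphasizing cost, and 1 of you emphasizing safety."` The point of the design, as in Pentland’s experiments, is that the agent summarizes and redistributes attention rather than rendering a verdict.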
Social Media Without the Noise
Pentland’s team extended this idea into digital public spaces: what he calls “town hall systems,” designed around deliberation instead of division. The prototype resembles a Twitter-like feed, except there are no followers, no influencers, and, crucially, no ads. The system continuously shows users a real-time visualization of what “everybody is saying.”
The result? When participants see this broad distribution of opinion, even on divisive issues, polarization drops sharply. “Just a few minutes of exposure to what other people actually think reduces measured polarization by 30 percent,” said Pentland. People discover that most others aren’t extremists; they’re reasonable, with nuanced views that rarely surface in traditional social media. “Those loud, crazy voices you hear online, those aren’t representative,” he remarked. “Most people actually agree pretty closely.”
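The core of that visualization is nothing exotic: show everyone the full distribution of opinion rather than the loudest tail. As a rough sketch, assuming respondents have been placed on a simple 1-to-5 agreement scale (the scale and function names are illustrative, not taken from the prototype):

```python
from collections import Counter

def opinion_distribution(responses, scale=(1, 5)):
    """Return the share of respondents at each point on an opinion
    scale (1 = strongly against ... 5 = strongly for), so the full
    spread, not just the loudest voices, is visible."""
    counts = Counter(responses)
    total = len(responses)
    lo, hi = scale
    return {v: counts.get(v, 0) / total for v in range(lo, hi + 1)}

def render(dist, width=20):
    """Render the distribution as a plain-text bar chart."""
    return "\n".join(f"{v}: {'#' * round(share * width)}"
                     for v, share in dist.items())
```

A typical sample puts most mass in the middle of the scale, which is exactly the pattern Pentland says reduces perceived polarization: the extremes turn out to be thin tails.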

Data Ownership and the Birth of the Loyal Agent
From conversational AIs, Pentland turned to the idea of the “loyal agent”: a personal, private AI that acts solely in its owner’s interest. “It’s something that you own,” he explained. “It uses your data, but it doesn’t share it.” Such an agent could live on a user’s phone or laptop, helping them navigate complex systems (government benefits, healthcare decisions, retail choices) without leaking personal information to corporate platforms.
This vision is not hypothetical. Pentland and collaborators, including Consumer Reports, are developing open-source frameworks that let individuals control their data while receiving AI-powered guidance. The agents operate under what Pentland calls a “standard of care”: a legal and ethical framework inspired by established professions such as medicine and law. “Just like a doctor has a legal standard for representing a patient,” he said, “your digital agent should have a legal duty to act in your best interest.”
That concept, binding AI behavior through existing legal doctrine rather than new regulation, is a pragmatic masterstroke. “You don’t need new laws,” Pentland noted. “The laws are already there.” The partnership with Consumer Reports, a 100-year-old nonprofit advocate for consumer rights, ensures that these principles of transparency, consent, and accountability are built into the infrastructure itself.
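The design principle, an agent that may use your data but has no channel for disclosing it, can be illustrated in miniature. This toy class is an assumption-laden sketch (the class, fields, and plan-picking task are invented for illustration; the real open-source framework is not shown here), but it captures the asymmetry Pentland describes: the data informs decisions on-device and never leaves.

```python
class LoyalAgent:
    """Toy model of a 'loyal agent': it uses the owner's data to act
    in the owner's interest, but exposes no way to disclose that data."""

    def __init__(self, owner_data):
        self.__data = dict(owner_data)  # private copy; stays on-device

    def recommend_plan(self, plans):
        """Pick the cheapest plan that covers the owner's actual usage,
        e.g. a mobile data plan sized to real monthly consumption."""
        usage = self.__data["monthly_gb"]
        eligible = [p for p in plans if p["gb"] >= usage]
        return min(eligible, key=lambda p: p["price"])["name"]

    def export_data(self):
        # The "standard of care" here is structural: there is simply
        # no supported path for the data to leave the agent.
        raise PermissionError("a loyal agent never shares its owner's data")
```

In a real system the duty of loyalty would be enforced by law and audit, not just by code structure, which is precisely why Pentland anchors it in the legal standard-of-care doctrine.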
A Post-Advertising Future
One of the most startling implications of Pentland’s framework is its potential to upend modern advertising. Because these personal agents shop, compare, and evaluate on behalf of users, they’re immune to emotional manipulation. “The agent doesn’t see smiling people in ads,” Pentland said. “It just looks at the specs, the price, and peer experiences.”
If successful, this inversion could end the need for targeted digital advertising as we know it. Markets would orient toward product merit instead of persuasion. Large swaths of the attention economy could collapse, or evolve into something more transparent.
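What makes the agent ad-immune is that its purchase logic consumes only structured attributes: specs, price, and peer ratings, so there is no surface for emotional persuasion to act on. A minimal sketch of such spec-only ranking (the attributes and weights are hypothetical, chosen for illustration):

```python
def score(product, weights):
    """Score a product on measurable attributes only. Negative weights
    penalize attributes where lower is better (e.g. price); marketing
    copy simply has no field here and thus no influence."""
    return sum(w * product[attr] for attr, w in weights.items())

def best_product(products, weights):
    """Return the name of the highest-scoring product."""
    return max(products, key=lambda p: score(p, weights))["name"]

catalog = [
    {"name": "A", "price": 100, "battery_hours": 10, "peer_rating": 4.0},
    {"name": "B", "price": 120, "battery_hours": 12, "peer_rating": 4.5},
    {"name": "C", "price": 90,  "battery_hours": 8,  "peer_rating": 4.2},
]
preferences = {"price": -1.0, "battery_hours": 2.0, "peer_rating": 10.0}
```

Here `best_product(catalog, preferences)` picks "C": cheapest with a solid peer rating. The weights encode the owner’s priorities, which is the sense in which the agent is loyal: the objective function belongs to the user, not to whoever bought the ad slot.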
Pentland likened this shift to the early days of industrial safety regulation. “It’s like when seat belts became standard,” he said. “At first, they were optional. Then people realized not having them killed people.” Establishing a legal and ethical “seat belt” for AI agents, he argued, may be necessary to protect individuals from systemic exploitation in digital markets.
The Smaller Company and the Stronger Guild
Pentland also connected these ideas to broader economic structures. As digital cooperation tools and AI intermediaries reduce transaction costs, “companies become flatter and smaller,” he observed. Traditional organizations arose because they made coordination easier. In an era where collaboration across networks becomes seamless, that logic inverts.
But what happens when employment fragments into independent agents and distributed teams? Pentland cautioned that this transition threatens social safety nets built around stable employers. “If your employers go away and it’s all gig work,” he asked, “who takes care of you?” His answer borrows from history: guilds. These professional collectives, akin to cooperative networks, could provide the support and solidarity once supplied by large firms.
“We’re rebuilding something like medieval guilds,” he said, referring to community-based institutions that anchor individuals amid economic decentralization. It’s a reminder that technology alone can’t solve structural vulnerabilities; institutions must adapt alongside it.
Measuring Human-Centric Progress
When asked how success would be measured, Pentland outlined a distinctly social scientific approach: “We’ll ask, are people saving money? Are they avoiding scams? Are benefits distributed equally across populations?” The metrics extend beyond economics to cognitive and emotional well-being. He emphasized that “mental health” and inclusivity are legitimate outcome variables for evaluating AI systems. Perhaps that’s where the “power of choice” truly lies, not only in having options, but in ensuring the systems that offer them promote human dignity and fairness.
The first field deployments of the loyal agent framework are expected by early next year, with Consumer Reports subscribers volunteering as the initial test population. “This is a purely pro-social effort,” Pentland said. “It’s open source, so we have a consortium of small start-ups using the same code, and that forces the big guys to begin to move.”
A New Social Contract for the Digital Age
Pentland’s talk at EmTechMIT pulsed with optimism grounded in realism. He acknowledged the risks of automation, privacy breaches, and corporate overreach, but offered a path grounded in existing strengths (law, community, and cooperation) rather than speculative regulation. The “loyal agent” reframes AI from a tool of control into one of empowerment.
For technologists, the takeaway is profound. AI need not replace trust; it can encode it. Systems designed to represent rather than manipulate could make digital life more human, not less. As Pentland’s work suggests, the real power of choice in the 21st century is not the choice between one product or platform and another; it’s the choice to own your data, direct your digital identity, and participate in collective intelligence without surrendering autonomy.
In an age obsessed with artificial power, Sandy Pentland reminds us that the deepest innovation lies in reclaiming our own.
For more information, please visit the following:
Website: https://www.josephraczynski.com/
Blog: https://JTConsultingMedia.com/
Podcast: https://techsnippetstoday.buzzsprout.com
LinkedIn: https://www.linkedin.com/in/joerazz/

