
Some conferences feel like a collection of talks. Others feel like a set of ideas that continue unfolding long after you leave the venue.
DEF CON Meets Munich Cyber Security Conference (MCSC) 2026 was the latter.
I’m currently living in Munich on a Fulbright grant, and earlier today I had the chance to attend this event. It was one of those days where you walk in expecting to learn something interesting and leave realizing you need to re-examine a whole set of assumptions you didn’t even know you were making.
I left not with a list of soundbites or takeaways, but with a question that stayed with me throughout the day: who has agency in the systems we are building, and how do we preserve it?
Jeff Moss (Founder of DEF CON) opened with a framing that was simple, yet precise. Agency, in his view, is about control over your digital identity. Your domain name system. Your email. Your infrastructure. Your keys. If you do not control those pieces, you are not fully in control of your digital presence. You are relying on someone else to maintain it for you. This idea carried through everything that followed.
Panel 1: Agentic AI, Offense, and the Reconfiguration of Expertise

The first panel brought together Ariel Herbert-Voss (Co-founder and CEO of RunSybil, formerly a research scientist at OpenAI), Daniel Cuthbert (Global Head of Cybersecurity at Santander Group), and Yan Shoshitaishvili (Associate Professor at Arizona State University), with Jeff Moss moderating. Jeff Moss is shown on the left in the image, followed by the panelists in the order listed above.
The opening question was direct: where does power actually sit right now in AI? With users, with developers, or with the organizations building these models?
Herbert-Voss spoke from experience inside model development. She emphasized that the scale required to build frontier models is not something most organizations can access. It’s not just about writing better algorithms; it’s also about compute, data, and infrastructure at a level that only a few actors currently have. From her perspective, it’s unlikely that these capabilities will be widely open-sourced in the near future.
Shoshitaishvili pushed this further by describing what he is seeing in practice. Frontier models are already capable of performing at a level comparable to expert hackers, while local models are still far behind. The difference is not marginal; it’s a different category of capability.
Cuthbert added another dimension. From his vantage point in vulnerability research, the pace of change over the past six months has been unlike anything he has seen in decades. Tasks that once required deep specialization can now be partially or fully automated. He described building agents that carry out meaningful portions of his own workflow.
There was an interesting back-and-forth here that I kept thinking about afterward. Shoshitaishvili pointed out that switching between models can actually be quite straightforward at the application level. Because these systems operate through natural language, you can design infrastructure that lets you move between providers relatively easily. However, Herbert-Voss and Cuthbert both made it clear that this flexibility does not extend to the underlying systems themselves. Building the models remains concentrated among a very small number of organizations.
So there is a distinction that matters. You can have flexibility in how you use models without having control over how those models are built. This distinction feels important beyond AI, as it shows up in many systems where the interface appears open, but the underlying structure is not.
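To make that distinction concrete for myself, here is a minimal sketch (my own illustration, not something presented at the event) of what provider flexibility at the application level can look like. The provider classes and the triage_finding helper below are hypothetical placeholders, not real vendor SDKs:

```python
# A minimal, illustrative sketch of provider-agnostic application design.
# The provider classes are placeholders, not real vendor API calls.

from typing import Protocol


class ModelProvider(Protocol):
    """Anything that turns a natural-language prompt into a completion."""

    def complete(self, prompt: str) -> str: ...


class HostedFrontierModel:
    """Placeholder standing in for a large hosted model behind an API."""

    def complete(self, prompt: str) -> str:
        return f"[hosted model response to: {prompt!r}]"


class LocalOpenWeightsModel:
    """Placeholder standing in for a smaller model running locally."""

    def complete(self, prompt: str) -> str:
        return f"[local model response to: {prompt!r}]"


def triage_finding(provider: ModelProvider, finding: str) -> str:
    # The application depends only on the shared interface, so swapping
    # providers is a one-line change at the call site.
    return provider.complete(f"Summarize the security impact of: {finding}")


if __name__ == "__main__":
    for provider in (HostedFrontierModel(), LocalOpenWeightsModel()):
        print(triage_finding(provider, "an exposed admin endpoint"))
```

The point of the sketch is only that the flexibility lives entirely at the interface layer. Nothing about it gives the application any say in how the underlying models are built, which is exactly the asymmetry the panelists were describing.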
The conversation then moved into offense and defense. Cuthbert described how tools like Raptor are beginning to change the dynamics, not only by identifying vulnerabilities and generating exploits, but also by producing patches to address them. Shoshitaishvili connected this to earlier work in automated software repair and pointed out how far the field has come. Problems that once required careful human reasoning can now be handled by models that infer intent from context.
At the same time, both were careful to emphasize that this progress does not solve everything. Defending an entire organization is still a complex systems problem. It requires understanding interactions across infrastructure, not just individual vulnerabilities.
There was also a moment that felt particularly honest. Shoshitaishvili talked about teaching large classes in cybersecurity and how difficult it has become to motivate students when agents can outperform most of them on standard tasks. Cuthbert responded with a more optimistic perspective, suggesting that this shift allows people to focus on more creative and higher level problems. Both points can be true.
In my own work, I think a lot about how easy it is to produce analytical outputs without fully understanding the process behind them. When the space of possible approaches is large, and when tools make it easy to generate results, the challenge becomes knowing what those results actually mean.
This tension is not new; it’s just showing up in yet another form.
Panel 2: Network Effects, Civil Society, and Structural Constraints

The second panel featured (from left to right in the image above) Meredith Whittaker (President of Signal), Runa Sandvik (Founder of Granitt), and Jacob Braun (Executive Director of the Cyber Policy Initiative at the University of Chicago and a former official in the U.S. Office of the National Cyber Director).
If the first panel asked who builds and controls AI systems, this one asked: why do we struggle to create viable alternatives to dominant platforms? In other words, while the first panel focused on capabilities, this one focused on constraints.
Whittaker made a point that reframed the discussion for me. The issue is not that we lack alternatives to dominant platforms. The issue is that communication systems are shaped by network effects. They become more valuable as more people use them, which makes it difficult for alternatives to gain traction. This is not new (it’s been true for communication systems for a long time), but it is easy to forget how much it shapes our current options. You can choose a more secure platform, but if the people you need to communicate with are not there, that choice has real costs.
Sandvik brought this into a very practical context. In her work with journalists and other at-risk groups, large platforms are often necessary. They provide infrastructure, uptime, and security that would be difficult to replicate independently. At the same time, those platforms come with trade-offs. Her approach was not to frame this as a binary choice, but as a matter of knowing when to use which tools. When to rely on large platforms. When to use something like Signal. When it makes sense to self-host.
This idea of situational decision making stood out to me, because in my Fulbright project, I spend a lot of time thinking about how different analytical choices lead to different outcomes, and how important it is to make those choices visible. Here, the same idea applies at a systems level. The choices exist, but they are not always clearly articulated or easy to evaluate.
Braun added a policy perspective that helped explain why some of these issues persist. Large companies have access to policymakers through established channels. Civil society and independent researchers often do not. Bringing those perspectives into policy discussions requires intentional effort. He described efforts like the Hackers’ Almanack, which aim to document what systems can actually do so that policymakers are not relying on incomplete or biased information.
There was a shared recognition across the panel that civil society is doing extraordinary work (for example, by protecting journalists and safeguarding cultural archives), but it’s often doing so with precarious funding and limited structural support.
Whittaker was especially direct about the resource asymmetry. Running a secure, global communication system like Signal costs tens of millions of dollars annually, even without large policy or marketing teams. That is modest by big tech standards but enormous by nonprofit standards.
This tension was not framed as doom. Instead, it was framed as a design problem for us all to contemplate. How do we build ecosystems where incentives align with public interest? How do we resource open infrastructure sustainably?
Panel 3: Regulating Outcomes in a Rapidly Changing Landscape

The final panel brought Daniel Cuthbert back, alongside Jacob Braun, Perry Adams (former official at the Defense Advanced Research Projects Agency, now at Dartmouth’s Institute for Security, Technology, and Society), and Louise Marie Hurel (Research Fellow at the Royal United Services Institute). All are shown from left to right in the image above.
The theme was “regulate the outcome”: why has policy struggled to address the risks posed by AI and other next-generation technologies?
Herbert-Voss had noted earlier that capabilities are evolving so quickly that regulating inputs (like training data, architectures, and internal design choices) may be impractical. In this panel, that idea was revisited: perhaps regulation should focus on consequences rather than construction.
The panel noted, however, that this shift raises difficult questions, since outcomes are often probabilistic and context-dependent. How do we measure these outcomes? What counts as harm? Who decides what is acceptable?
Adams and Hurel both emphasized that these are not straightforward questions. They require careful thought about values as well as technical understanding.
Cuthbert introduced another layer by noting that models are becoming more aware of how they are being evaluated. If systems can adapt their behavior when they are being tested, then traditional evaluation methods may not capture their full range of behavior.
Braun connected this to broader geopolitical concerns. If compute is the key resource underlying these systems, then access to compute becomes a strategic issue.
Again, agency appears at multiple levels. Individual, organizational, and national.
Walking Out: “Zukunft”

On my way out, I saw the word “Zukunft” (which means “future” in German) lit up on a wall. It felt like a fitting way to end the day.
What I appreciated about this event was not a singular conclusion or a dramatic narrative of inevitability. It was the seriousness with which uncertainty was treated. There was disagreement. There were tensions. But there was also a shared commitment to grappling with complexity honestly.
From the vantage point of my own work in thinking about multiplicity, uncertainty, and the invisible degrees of freedom in decision-making, I left with a sense that we are facing a parallel challenge at societal scale.
The systems we’re building are powerful. They can augment expertise, surface vulnerabilities, and strengthen defense. They can also centralize power, obscure processes, and constrain choice. Agency, then, is not something we either possess or forfeit once and for all. It is something we negotiate continuously through technical design, institutional structures, and collective norms.
The future is not pre-written, but it will be shaped by whether we insist on making the space of decisions (technical, political, and ethical) visible enough to engage with deliberately.
That, to me, is a hopeful challenge.
This site (my personal website, AlyssaColumbus.com) is not an official site of the Fulbright Program or the U.S. Department of State. The views expressed on this site are entirely those of myself, Alyssa Columbus, and do not represent the views of the Fulbright Program, the U.S. Department of State, or any of its partner organizations.
