Dec 9, 2025
Contributor Spotlight Interview: ~niblyx & ~bonbud - Transcript
~sarlev: This is also our first two-person spotlight interview, so we'll see how it goes and adapt accordingly. I generally like to start by just asking: what first drew you into the idea that we needed to throw away and rewrite the entire networked computing stack?
~bonbud-macryg: I'll let Thomas go first if he wants.
~niblyx-malnus: So many of the ideas are just a part of who I am now that it's a little bit hard to remember the trajectory sometimes. But I studied computer science for my undergraduate degree. I didn't really program much; I mostly did math. After that, I wandered around for a bit and decided that I didn't really want to be stuck in front of a computer for the rest of my life. I realized that I'd been interested in buildings and architecture, which I had explored a little when I was in high school. So I explored that for a bit and stumbled on Christopher Alexander, who was a big influence on me. And I also realized there's really no job that involves thinking or problem solving today where you're not stuck in front of a computer.
It just turns out that computers are this incredible leverage you can use in pretty much any domain to turbocharge your thinking, at least they have that potential, which is sort of unrealized in many ways, and that's frustrating. That realization set the stage for being open to something like Urbit.
I was always very concerned with what it takes to build a physical environment that kind of promotes human flourishing, allows people to be their best selves, to develop and to just kind of live a light and effortless life that just feels natural. So I absorbed everything Christopher Alexander ever wrote. I also picked up on the fact that he was really big in some software communities early on. He was actually probably more popular there than he was in the field of architecture on some level, in the way that he thought about not just fundamentally what makes a good structure, but also how structures evolve, what is the process by which you can get from nothing to a structure that works.
Then I discovered Urbit through Curtis Yarvin's writings around the dot-com era. He popped up on my radar and I realized there was this huge confluence between him, Galen [Wolfe-Pauly], Christopher Alexander and, I think, a few other people. There was a blog post on Urbit.org at the time about Christopher Alexander. There's just this incredible synchronicity, an overlap between these ways of thinking.
At the same time, Christopher Alexander kind of black-pilled me on the possibility of building anything nice in today's world. His ultimate conclusion, which maybe not a lot of people who encounter him realize, and which he presents in a keynote he gave at OOPSLA, is that the architects of the current day won't be the people who are building buildings. This is because the way that they do it is completely broken, and in order to recover a way of building that actually produces living structure, as he calls it, we're going to have to leverage the modern tools that exist.
And the people who are going to be able to do that are essentially people who can use software, who are conversant in those tools, and who can think like that. So I got a master's in machine learning with the idea that I could make money programming while I tried to build a few little tools on my own, including a sort of project management tool.
It was something I had tried to build on my own initially as a tool for myself, and it just became very apparent, for a bunch of reasons, that it would be a huge headache to build even a small tool for myself that I could use in my daily life. I stumbled on Urbit at around the same time and it clicked. I had discovered functional programming around then as well, and it aligned with a lot of the Christopher Alexander stuff. From there, I got my first grant from the Urbit Foundation to build a task manager, Nested Goals, based on the idea of a hierarchical to-do list. That was pretty much how I got to Urbit.
~sarlev: How about you, Evan? Did you come by way of the architecture pipeline? Was it something else?
~bonbud-macryg: I studied graphic design. I wanted to do a kind of combination graphic design, web development, lifestyle business. It's a very nice way to make a living if you're creative and technically inclined. I'd dabbled in programming a little bit. The first programming language I ever touched was BASIC, which is a weird thing for someone my age, because Sony shipped a BASIC IDE with the PlayStation 2 demo disc in the EU. It was a regulatory exploit: with it, the console was taxed as a personal computer rather than a games console. That was my introduction to programming.
Given that experience, I thought, "I've studied design formally. Let's try to teach myself some JavaScript and get into web development." I learned JavaScript over one summer. As I was getting to the later stages of that, actually using jQuery and deploying a website, I started to see the seams. I started to see that this was all kind of held together with tape. It's slow because you use so many packages and frameworks to get anything done. That black-pilled me on web development completely. I thought, "I don't want to write JavaScript for the rest of my life. I don't care how much they pay me. You can't pay me enough." So I just did design for a few years, around 2016, 2017, maybe.
Around that time, I first heard of Urbit on some blog. I thought, "This is really weird but kind of interesting. I'll just follow the newsletter and see if anything interesting comes up." I think I tried to go on the website to find out more and the website was just a video of the ocean and I thought, "Okay, I'll come back later."

So I eventually did come back later, sometime around 2020, 2021. It was around that time, around COVID, that it became clear the internet was real life now, and that the fact that the internet is bad is actually a huge civilizational-scale problem, with implications from the way people talk, to the highest levels of government, to probably the structure of the human brain. We clearly need to do something about this. So I was quite interested in Ethereum and Web3 stuff. I came into it via the Interdependence podcast, which was run by Holly Herndon and Mat Dryhurst. I joined a few Discords. I just tried to see what was going on.
I basically thought, "Let me check out that Urbit thing," and at the same time, near the end of 2021, I got an email saying that Urbit was ready to build on. And I thought, "Finally, maybe I can make some little apps for myself and just host them in a very kind of simple, easy way." I got into Hoon School and then I got into the grants program and then I ended up getting a job at the Urbit Foundation.
When I was getting into Urbit, I also found a bunch of videos and blogs with Galen talking about the Christopher Alexander stuff. I watched all the Assembly 2021-era podcasts that Justin Murphy did, and I had a similar kind of trajectory to Thomas. But an interesting thing about Christopher Alexander and software is that if you watch that OOPSLA talk Thomas mentioned, at the end there's this tone where he doesn't quite understand what it is about his work that software developers find interesting, but he stresses the moral dimension of it. And it's interesting that for all the influence Christopher Alexander has had in software, all of the software we use today is ugly, brittle, mass-manufactured, actively harmful. It has nothing to do with the wants or desires or lives of the people using it, in a lot of ways. I believe everyone in Silicon Valley has read A Pattern Language, but I had never heard of The Timeless Way of Building, the first book in the trilogy, until very recently. If you actually read that book, he talks about how buildings, towns, architecture should be owned and run and built and decided upon by the people who use them. And if you do that, you enable these specific patterns to emerge that are particular to that climate, that environment, that culture. You enable individual houses to grow organically according to the needs of the people who live there.
And that's Urbit. I see Urbit as kind of an arts and crafts movement for software, a reaction to the dark satanic mills, almost. I was really taken by this idea of the software artisan who fully owns and controls and understands his computer. He's the guy in your village who makes all the software the village uses to run its lives and communities. And that was before AI really took off for coding. Now that's accessible in a way it wasn't even a couple of years ago.
~sarlev: You guys both came to Urbit before the mass proliferation of, let's say, "useful LLMs," but now there is this explosion of generative AI, whether image models or language models or otherwise. Where do you think this leads in terms of LLMs interacting with Urbit?
~niblyx-malnus: I think Urbit, Nock, and Hoon are very well suited to LLMs, both eventually actually hosting them as part of the architecture, running the computations, jetted or whatever, in some manner, but also just in terms of the legibility of the code to an LLM. I think functional programming in general is maybe easier for them to understand on some level and to work with. Basically for the same reason I found it so pleasant to work with Urbit and Hoon: you know exactly what everything is, you know the inputs, you know the outputs, and it's just easy to work with things in an isolated way.
I think that is what these LLMs need. A big part of being good at working with these LLMs is managing the context and the limited context window, which I think on some level is always going to be there and is a fundamental aspect of cognition. You can only focus on so much. It seems like a very natural fit for LLMs.
~sarlev: What exists now? I know you've kind of been experimenting in a variety of ways. What's the current state of things?
~niblyx-malnus: Currently there are basically two projects we're working on. The first, and they all have bad names, is called Clurd. I threw it together this summer when I first encountered Claude Code. As soon as I used it, I thought, "The whole thing is basically one chat where sometimes it sends a message saying it wants to use this tool or whatever. There's absolutely no reason this couldn't just be running things directly, interacting with my Urbit. That should happen."
I tried to think of a few different ways that it could be done, but I was like, "Let's just get something stupidly simple working first. I interact with Urbit via the terminal. So maybe I can get this LLM to do the same thing."
So I basically just pointed Claude Code at the %webterm codebase; I didn't write any of the Clurd codebase myself. It's LLM slop, but it works. Effectively, it sends commands to your Urbit over HTTP, one character at a time, and then subscribes and listens for the output the same way your webterm does. That allows really tight feedback loops: the LLM can send a command, wait a little bit, and then see the results.
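To make the mechanics concrete, here is a minimal sketch, in Python, of the kind of messages a client like Clurd might send through Eyre's HTTP channel interface. The `herm` app name, `belt` mark, and JSON shape here are illustrative assumptions, not Clurd's actual wiring:

```python
import json

def belt_pokes(ship: str, text: str, start_id: int = 1) -> list[dict]:
    """Build one Eyre channel poke per character, feeding keystrokes
    to the ship's terminal agent one at a time. The app name, mark,
    and JSON shape are hypothetical placeholders."""
    return [
        {
            "id": start_id + i,
            "action": "poke",
            "ship": ship,
            "app": "herm",          # hypothetical terminal agent
            "mark": "belt",         # hypothetical keystroke mark
            "json": {"txt": [ch]},  # one character per poke
        }
        for i, ch in enumerate(text)
    ]

# A client would PUT this JSON to the ship's channel endpoint and
# listen on the same channel via server-sent events for the output.
payload = json.dumps(belt_pokes("zod", "(add 2 2)\n"))
```

The interesting part is the loop this enables: send keystrokes, wait, read the terminal stream back, and let the LLM decide its next command from what it sees.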
I guess the intention was twofold. On the one hand, I thought it would be cool if an LLM could just use my Urbit. On the other hand, I had the sense that Claude Code was almost good at Hoon. It's not quite there. It was constantly making mistakes; it would generate code that didn't compile on the first try. But I figured I could teach it to be good at Hoon if I could just get it to edit a file, commit it, see the error, and then adjust. It turns out that's actually super effective. It adjusts pretty much immediately.
~bonbud-macryg: Yeah. I wonder if Hoon has the highest level of Claude proficiency relative to the amount of example code on the internet. There is not much Hoon in the world, but Claude's gotten pretty good at it in the past year. I think 3.7 was able to write FizzBuzz in Hoon if you fed it the docs by hand. And this summer I fully AI-coded an app, front end and back end, in Hoon.
And these days, there is MCP integration in the docs.urbit.org site so it can look stuff up if it needs to. It's pretty good at Hoon these days. Most of the mistakes it makes in my experience are very minor, or they are strange bits of syntax that I've never seen a human write, but they do technically work. So that's interesting.
~niblyx-malnus: Yeah. It's not always very idiomatic Hoon, but it can write things that compile, and it can also understand novel, unusual frameworks. The way I'm writing some of the stuff for the SPV wallet we're building for Groundwire is not very typical for a Gall agent, but it picks it up right away. It's not thrown off by that. It was able to explain how %spider works to me without much context, just by pointing it at the %base desk.
Sometimes it will be a little bit overconfident in something which is not quite true, but it's rare that it's completely off the mark. I use that every day now to help with programming Hoon stuff, which is nice.
~sarlev: Neat. You mentioned two AI-related Urbit projects; what is the other?
~niblyx-malnus: We're experimenting with a kind of MCP server, with the working name urbit-master, which is basically just the name of the repo currently. My approach is essentially to build an "everything app" of sorts, which is just me experimenting, building little tools I want to use myself, including trying to adapt things I've built in the past. I haven't implemented these yet, but the idea is to tackle some of the lowest-hanging fruit for a personal-assistant LLM involving the coordination of your life across time: my to-do list, handling time zones, recurrence rules, calendar things, etc.
I have it running live on my ship and I'm chipping away at it. Essentially, I connect Claude to the MCP server running on my main ship. It can do things like send me a Telegram message as a rudimentary notification system. There's also a very simple to-do list that mirrors the one built into Claude Code, but stores it on my Urbit and persists it across sessions. It can categorize items: these are to-dos for me, versus for it while it's engaged in a particular task.
There's also a ChatGPT style chat interface where you're sending API requests directly to Claude. So you have to put in your credentials and stuff, but it has access.
~sarlev: And this is running on your little chat interface, and you can say, "Hey, add a few things to my to-do list, and then while you're at it, can you make me a way to look at my inbound Ames packets?"
~niblyx-malnus: Well, you can't do that right now, but in principle, you could do that long term. At the moment, basically I can have it set items on my to-do list, schedule reminders for me. Otherwise you can just have sort of a normal LLM chat. There are a lot of opportunities there, for example, to use context more effectively.
One of the things I find really annoying about Claude Code is the constant compacting, especially after it happens a few times. I think it takes longer and longer; it can take minutes to compress the chat history. There are a lot of ways you could give it tools for exploring the chat history instead of forcing it to compact and carry that along with it. You could probably do something equivalent to compaction asynchronously, so it doesn't slow things down. You would have a sliding window over the chat history, and some trigger such that when the window has advanced a certain distance, it makes a separate API request that just says, "Summarize this piece," and dumps the result into the context. Having all of your LLM chat data right at your fingertips and easy to manipulate could definitely make interacting with an LLM richer and more useful.
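As a toy sketch of that idea: keep a live window of recent messages, and when it overflows, summarize the overflow out-of-band rather than compacting the whole chat inline. The window size and half-window drain policy here are arbitrary choices, and `summarize` stands in for a separate LLM call:

```python
from collections import deque

class SlidingHistory:
    """Maintain a bounded window of recent messages; overflow is
    drained in chunks and replaced by summaries produced by a
    separate summarizer (e.g. an out-of-band LLM request)."""
    def __init__(self, window: int, summarize):
        self.window = window          # max live messages
        self.summarize = summarize    # chunk -> summary string
        self.live = deque()
        self.summaries = []

    def append(self, msg: str):
        self.live.append(msg)
        if len(self.live) > self.window:
            # drain half the window and summarize it out-of-band
            chunk = [self.live.popleft() for _ in range(self.window // 2)]
            self.summaries.append(self.summarize(chunk))

    def context(self) -> list[str]:
        """What would actually be sent to the model: summaries
        of old history, then the live window verbatim."""
        return self.summaries + list(self.live)
```

In a real system the `summarize` call would run asynchronously so the main chat never blocks on compression.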
~bonbud-macryg: We are using this chatbot work that ~niblyx-malnus has been doing as a test bed for a more general-purpose MCP server agent. When I spec'd that project out at first, I was really interested in what Urbit MCP could do that other platforms can't. The two things that appeal to me about this MCP server idea are context and capabilities. On capabilities, when I was reading the MCP docs cover to cover, something that really stood out was that you can add capabilities to an MCP server and the server will update its clients about the new tool in real time.
I've never heard of anyone using that, and I think it's because they can't. There is no platform where you want the user to be able to add arbitrary capabilities to the MCP server. The only place you can do that is where the user runs and owns their own server and wants to actually make it more useful to themselves and their LLM. So the idea of the LLM recursively improving itself with Urbit over time just completely sold me.
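Concretely, that dynamic-capability story rides on a JSON-RPC notification: when a server's tool list changes, it emits `notifications/tools/list_changed` and clients re-fetch the tool list. A minimal sketch of a registry that does this; the registry class itself is invented for illustration, while the notification method name comes from the MCP spec:

```python
import json

class ToolRegistry:
    """Toy tool registry: adding a tool returns the JSON-RPC
    notification an MCP server would emit so connected clients
    know to re-fetch tools/list."""
    def __init__(self):
        self.tools = {}

    def add_tool(self, name: str, description: str) -> str:
        self.tools[name] = {"name": name, "description": description}
        return json.dumps({
            "jsonrpc": "2.0",
            "method": "notifications/tools/list_changed",
        })

reg = ToolRegistry()
note = reg.add_tool("todo-add", "Append an item to the ship's to-do list")
```

On a personal server, the user (or the LLM itself) could call `add_tool` at any time and every connected client would pick up the new capability mid-session.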
The other part of my vision for this is that the MCP server's tools would essentially be the list of effects the Gall agent can emit: to other agents on your ship, to the vanes in your Urbit OS, and to other people's ships. These lists of effects are the one and only way Gall interacts with everything. So if we give the LLM access to that, it's general-purpose enough to do everything you'd want to do.
The other thing MCP gives you is something called resources. These are basically just endpoints where the LLM can grab relevant bits of data it needs to accomplish some task. One nice thing about Urbit is that all of your apps have scry endpoints, which are basically read endpoints from which you can get various bits of data. It would be fairly easy to have the LLM look at all the apps installed on your ship, look at all their scry endpoints, build and test requests to those endpoints, and then register them as MCP resources, which means the LLM can grab arbitrary bits of data from everywhere on your ship.
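A sketch of that mapping, turning scry endpoints into MCP-style resource descriptors. The `urbit+scry://` URI scheme and the specific app and path names are made up for illustration; MCP only requires each resource to have a unique URI, a name, and optionally a MIME type:

```python
def scry_to_resource(ship: str, app: str, path: str) -> dict:
    """Describe a ship's scry endpoint as an MCP-style resource
    descriptor. The URI scheme here is hypothetical."""
    return {
        "uri": f"urbit+scry://{ship}/{app}{path}",
        "name": f"{app}{path}",
        "mimeType": "application/json",
    }

# Hypothetical app/path names, as discovered by walking installed desks:
resources = [
    scry_to_resource("~sampel-palnet", "journal", "/entries/recent"),
    scry_to_resource("~sampel-palnet", "tasks", "/list/all"),
]
```

The discovery step, enumerating installed agents and their scry paths, would feed a list like `resources` straight into the server's resource registry.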
One interesting thing about that is that a lot of AI products right now are basically just trying to get as much context about you as possible. The whole AI-browser idea is that the browser is where they'll get the most context about you for the LLM. I use one of these browsers. I use Arc right now. It's quite fun. But if I look at what it knows about me, and at what Claude desktop knows about me via its memory features, it's extremely superficial. If you want an LLM that has a lot of context about you, there's this tension: you want to give it context, but you can't trust the company that runs the LLM to use that context responsibly.
If instead you had the LLM running on your own Urbit, or your own box with your Urbit OS on it, you would have a system you could trust to know everything you've ever written down on your Urbit and everything everyone's ever written to you on Urbit. This is already plausible now if you have a really beefy computer, and it's only going to become more available on smaller and smaller machines. If you're in a few long-running groups, that's a lot of information. And if the organization you work at has a Tlon Messenger group, that's a lot of very high-quality material relevant to whatever problem you're trying to solve. So that is very compelling to me, and I don't think anyone else can do it.
One of the interesting things about the Urbit AI story for me is that it seems like the best way to do most consumer AI use cases and it's also the most private and the most secure and the most sovereign one, which is nice. It's nice that the most effective way to do this for most people is also the way that's actually good for most people.
~sarlev: I've always been concerned, or maybe overly concerned, with how much context about me the AI companies have, but you bring up this useful point that they actually don't have enough unique information about you to be truly useful.
~bonbud-macryg: Yes.
~sarlev: In terms of where Clurd and Urbit Master are today and where they're going, how do you think about getting that additional context? How are you thinking about the next step, both getting people willing to put more information in, and making that information usefully available to the LLMs?
~bonbud-macryg: Making that stuff available to the LLM seems fairly easy. For a proper Urbit MCP server, we actually have a skeleton at this point; the tools-registry stuff is a solved problem, and the resources stuff also seems fairly easy. We just need a dedicated Gall agent that will let the LLM interact with your Urbit in the way we want and retrieve the various bits of data from your ship. All that should be quite easy.
The real problem is actually on the hardware side, because you can store as much data on your Urbit as you want, but if you hook Claude up to it, that data is still going to Anthropic's servers. What you actually want is the LLM running on your machine at home. This is something I know you've been thinking about quite a bit; you're probably Urbit's leading expert on this question right now. So I'd be interested in your take.
~sarlev: There's definitely an interesting tension there. I've been looking at this, and at a quasi-reasonable cost for a lot of people's annual hardware budgets, you could get some pretty good local LLMs running, but there are going to be people who are just not interested in running local models. To me there's this question of, "Can your Urbit be smart enough to feed the pieces?" Your Urbit has full context about you, and then it feeds the relevant pieces to the appropriate LLM. Even if you've got your amazing local AI, it's not going to be as good as Anthropic's Claude running on a gazillion H100s. If you're programming a Gall agent to run on your ship, Claude can do that for you, but you might not want it looking at your health data or whatever. It's a different thing. So do you think about that context provision in any particular way?
~bonbud-macryg: This is quite an interesting question relative to Urbit, because one part of it is RAG, retrieval-augmented generation. For certain use cases, it's really useful to have a vectorized database that can be searched, with the search results sent along with your prompt, so the LLM automatically has a lot of context about whatever you're asking about. But the interesting thing about Urbit is that we have the referentially transparent scry namespace, which means you can specify exactly the version of the data that you want.
Every piece of data has a scry endpoint, or scry path. That path points to the exact version of the data, and barring some absolutely elite hackers, you know that data is always going to be the same every time you ask for it. So it's hard to do supply-chain attacks on it, which means that if you're constructing this with data on your Urbit, you already have a lot of provenance for that data, and it would be very easy for us to implement some kind of access control on the userspace side.
We actually kind of already have this with Clay. If all your data was in a file system, and that file system had access control on Urbit, you could add granular access control where each bit of data lets certain Urbit IDs read it and prevents other Urbit IDs from reading it.
The interesting thing there is that your LLM could have an Urbit ID. You could run it on a moon or any other ship, and then any Gall agent, any file in Clay, or some future file system would be able to discriminate based on who is asking for the data: you, your LLM, or even someone else's LLM. And you could get very fine-grained with that. If you had some beefy computer running a really great LLM, I could send queries to your LLM from my Urbit, and I would know that you can't see anything I don't want you to. You can see my coding projects and nothing else. So yeah, there's this whole question of provenance and access control and identity where Urbit can offer so much to both developers and users around AI.
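A toy version of that per-path, per-identity check. The path shapes and IDs here are invented, and longest-prefix-wins is just one plausible policy for an allowlist like this:

```python
def can_read(acl: dict[str, set[str]], ship: str, path: str) -> bool:
    """acl maps path prefixes to the set of Urbit IDs allowed to read
    beneath them; the most specific (longest) matching prefix wins.
    No matching prefix means no access."""
    matches = [p for p in acl if path.startswith(p)]
    if not matches:
        return False
    best = max(matches, key=len)  # most specific rule applies
    return ship in acl[best]

# Hypothetical policy: the owner reads everything; a collaborator's
# LLM (running under its own Urbit ID) sees only the coding subtree.
acl = {
    "/": {"~niblyx-malnus"},
    "/projects/code": {"~niblyx-malnus", "~bonbud-macryg"},
}
```

Under this policy, a query from `~bonbud-macryg` for `/projects/code/wallet` succeeds, while a query for `/notes/health` is refused, which is exactly the "you can see my coding projects and nothing else" property described above.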
~sarlev: ~bonbud-macryg, you mentioned Claude's Hoon is getting better. There are projects for making the networking go faster. There's vere64 for storing more data in your Urbit, which exists as an early developer preview today. So if you wanted to dump tons of data into an experimental Urbit, you could. And of course you guys are working on things on your end, including getting Urbit IDs on Bitcoin and a native Bitcoin SPV wallet. What improvements are you looking forward to most in Urbit, whether they're directly applied to the AI problem and the LLM work you're doing or not? What's got you on the edge of your seat?
~bonbud-macryg: What do you think, Thomas?
~niblyx-malnus: I don't know, honestly. I probably should think more about this, but I've always been of the mind that even without being incredibly fast and storing large amounts of data, Urbit has so much potential as an orchestrator and organizer of some very fundamental things in your life that don't actually require much performance or much storage. I think that's even true with respect to LLM stuff and things that AI could leverage.
For the near-term future, anything AI-heavy is going to rely on computation that happens outside of Urbit. That doesn't change the fact that the structure of it and how it works make it a really good way to coordinate that computation, and to keep all of your data in a way that's incredibly simple and legible, both to you and to your personal AI assistant.
~sarlev: It's trustworthy. You don't need it to be fast. It doesn't need to do everything. It just needs to be a worthwhile trust point that I can put unique things into.
~niblyx-malnus: Yeah. So I've always been more excited about just getting that structure right, or discovering what that structure actually is, because it's clearly latent in Urbit even if not fully manifested yet. I like this quote from ~hastuc-dibtux; it's about Shrubbery, but I think it applies generally: "allowing computation at the limit of thermodynamic efficiency." I think that's the right way to think about it: the least wasted energy, the most parsimonious computation. Ultimately, the more powerful Urbit gets, the more energy you can power through it, but it's really about the efficient allocation of whatever energy exists.
~sarlev: ~bonbud-macryg, it feels like that echoes some of your early JavaScript programming complaints. Casey Muratori talks about how all this stuff could be fast, but you're making network calls for some ungodly reason to get information that you don't need, and doing it 14 times because you used six different packages that are all talking to each other in some way that you don't understand.
~bonbud-macryg: Yeah. Npm and its consequences… Yeah.
~sarlev: You've come from the JavaScript world and it sounds like your stance is, "Kill it all." I know you've worked on various Shrubbery and Shrub-adjacent projects, as well as published your own writings on the future of userspace on Urbit. Where do you see that going? What's the vision you've got there?
~bonbud-macryg: I was part of the team working on Sky in the latter half of 2024 and early 2025. I was also working on a React app with a kind of namespace file system back end. Basically what we have, if you look at the early Moron Lab blog posts, is Nock and the namespace. As ~hastuc-dibtux says, "if you run these ideas to their logical conclusion, you have Urbit." The namespace as it exists right now already has a lot of very nice properties, and we're just not using it enough. Gall is nice in many ways. It's a very simple way to build and run little apps once you've wrapped your head around it, but it's not as composable as we'd like, and there are a lot of properties it's just not inheriting from the namespace.
So, when I was building a Gall and React version of Sky, I was driving for it to be a prototype of a "namespace browser." I think ~migrev-dolseg's Hawk project is driving at the same thing, and that's a real thing you can use right now, and it's very good.
One of the main problems I had with Urbit's existing file system capabilities is that the interface between the client and the file system could be much more robust than it is right now. Referential transparency is nice, but to take advantage of it you need to know exactly which version of the file you're asking for. And if you don't know which version you're asking for, and you don't know how many versions there are, you can't ask for that file in a referentially transparent way right now.
Also, using Clay, our current file system, you need to know the file type of the file you're asking for, which is not very useful in a file browser, where you're going to a path and could be asking for anything, really. So in our case with Sky, we had to make everything in Clay a mime file: a pair of the file-type declaration and the raw data itself, which was compatible with the browser you're already using. I then moved that file system over to Gall, and that solved the file-type problem. It also solved our networking problem, because we can't scry for some piece of data at its latest version if we don't know what version we're asking for. But in a Gall app, there are ways to do a kind of handshake: if I want the latest version of a file from your ship, my Gall app can just ask your Gall app for it, as long as we're running the same app. Then we can negotiate that in a way that's very hacky but does actually work.
There is this whole permissions and access control question as well. Obviously you don't want all of your data to be public, but you do want to share it with some people. Gall's remote scry functionality does have provisions for this now, but it's not trivial to implement. It should be very easy for developers and for end users to set granular access controls.
~sarlev: That's understandable.
~bonbud-macryg: This is sort of my answer to your earlier question, in that I'm very excited for vere64, because once everybody starts putting tons of content in their ships and needs content distribution on Urbit, we have to solve the scry problem, because the data isn't going anywhere if we can't get it.
~sarlev: You're stoked because it forces your issue, which is next on the docket.
~bonbud-macryg: Yes. File systems and networking finally.
~sarlev: What's on your radar to be improved? I vaguely suspect that in the context of AI integrations with Urbit, there's not going to be an "end" to that for decades. So what are your short-term goals, at which point you'll say, "Okay, the AI integration work is going to become a background thing"?
~bonbud-macryg: There is this x402 proposal that I believe Cloudflare and Coinbase are pushing. Today, you can receive a 402 HTTP error, which says some kind of payment is required, but there were no payment rails in HTTP until this x402 proposal, which adds those rails and is compatible with fiat, crypto, credit cards, AI, whatever.
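The client side of that flow is simple enough to sketch: request, and if the server answers 402 with payment requirements, construct a payment and retry with it attached. The `payment-required` and `x-payment` header names below are assumptions for illustration, not quoted from the x402 spec, and `fetch` and `pay` are stubs for real transport and wallet calls:

```python
def fetch_with_payment(fetch, pay, url: str):
    """Generic HTTP 402 retry loop. `fetch(url, headers)` returns
    (status, headers, body); `pay(requirements)` turns the server's
    stated requirements into a payment receipt. Header names are
    illustrative assumptions, not the x402 spec verbatim."""
    status, headers, body = fetch(url, {})
    if status != 402:
        return body  # no payment demanded
    receipt = pay(headers.get("payment-required", {}))
    status, headers, body = fetch(url, {"x-payment": receipt})
    if status != 200:
        raise RuntimeError("payment not accepted")
    return body
```

With a wallet on the ship, `pay` could be an agent poke that signs and broadcasts the payment, making the whole paywall round-trip something an on-ship AI can drive unattended.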
That seems interesting for Urbit, because one thing with remote scry, or any kind of scry over HTTP, is that you can publish stuff with just a link to the clear web. If we can request paywalled content, and we have an Urbit with a crypto wallet in it and an AI in your Urbit as well, we can do Substack on Urbit, which has been a dream for a long time. But it would be much more powerful even than that, because we could also do agentic commerce on it, with this very rich ID system that we've already got. So those could be interesting to explore in Q1, just throwing together some little demos. ~niblyx-malnus, what do you think?
~niblyx-malnus: Yeah, I think those are all good ideas and exciting possibilities. The way I approach this is the same as the way I've approached Urbit: thinking about it as creating the conditions for a lot of the things that we want, whether it's remote scry or userspace security or whatever. It's important to plan ahead for those things and do little experiments to see how they could be done, but it's almost more important to create the conditions where those things are an absolute necessity and are kind of forced to emerge on their own. I think you're cultivating a living organism, in a way.
That's my intention here. I guess we're kind of taking two different approaches to this LLM stuff. On the one hand, we have the MCP server, which is supposed to be a general-purpose, very simple way for people to add new tools that their LLM can access. And then there is my desk, Urbit Master, which is really just a sandbox for me to play around with stuff. It's all pointed at putting more of my life on Urbit: coordinating my time, notifying me of things, communicating with others, consolidating my thoughts.
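The "add new tools that their LLM can access" model can be sketched abstractly. This is not the real MCP SDK or wire protocol, just a minimal illustration of the idea: tools are registered by name with a description, and the model lists them and invokes them with structured arguments rather than driving a GUI.

```python
# Minimal sketch of the MCP idea (names and structure are illustrative,
# not the actual MCP SDK): a registry of named, described tools that a
# model can discover and call in a structured way.

tools = {}

def tool(name, description):
    """Decorator that registers a function as a callable tool."""
    def register(fn):
        tools[name] = {"description": description, "fn": fn}
        return fn
    return register

@tool("add-event", "Add a calendar event on this ship (hypothetical tool)")
def add_event(title, when):
    return f"scheduled {title!r} at {when}"

def list_tools():
    """What the model sees: names and descriptions, not code."""
    return {name: t["description"] for name, t in tools.items()}

def call_tool(name, **kwargs):
    """Structured invocation by name, as an MCP client would perform."""
    return tools[name]["fn"](**kwargs)
```

The structured descriptions are what make this tractable for a model, compared with screen-scraping a complicated desktop UI.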
If basically my entire mind palace or something could just be my Urbit, that's what I want to move towards. The way I intend to do that is to chip away at it and then notice that I've been made more powerful in some way: the living organism that I'm cultivating is stronger and more developed, and then there are these opportunities that always just kind of show up based on that. Something has been made easier, and it's right there to be grabbed. There's such a rich opportunity, an ability to do so many cool things, when you have communication, payments, and a semi-intelligent computing thing that has access to it all. I just think the sky is the limit on some level.
~sarlev: Would you say it's timeless?
~bonbud-macryg: Which is a very Christopher Alexander way of building a computer. It is. And that's The Timeless Way of Building.
~sarlev: We're getting close to the end of our time, so I've got my last question for you: if you weren't working on Urbit, what would you be doing instead?
~bonbud-macryg: Paving the world in MCP servers. I'm actually somewhat bearish on computer-use models. Satya Nadella really wants an AI that can use Windows, and it turns out Windows is quite complicated to use at this point. It's also bad. Even if it worked, it would be complicated at the best of times for the poor LLM. Urbit is a very simple system; we have that advantage, I think. In general, MCP is a very impressive protocol. It just gives the AI a nice structured way to see and interact with the world, whereas I've yet to see a computer-use model that can set a Google Calendar event. And I've used MCP servers for probably most of a year now to do a lot of coding projects. There is just a lot of mileage there, and probably will be for quite a long time. Even if frontier model development stopped advancing today, we have years and years of products to build on top of this: to wire these AIs to the rest of the world and to each other.
~niblyx-malnus: I'm not sure. I think I'd probably still be wandering around trying to figure out what to do. At some point I wanted to just have basically a normie computer programming job and then, on the side, try to be Christopher Alexander's disciple and use computers to generate 3D architecture, where we could start the architectural revolution virtually or something. I never actually got around to doing that. But I'm pretty much all in on Urbit at this point. It's been a good way to learn about a bunch of different aspects of computer science and the world of software in general. So it's definitely opened up a lot of other things to be interested in, too.