The MIT-Pay Me Or Get The Fork Out License
Open source was never intended to serve as an all‑you‑can‑eat buffet for industrial AI. AI has arguably made it effortless to clone and remix code. Even the recent shift (driven in part by GenAI) of repo owners moving their licenses to “source available” doesn't really help, because AI‑assisted coding lets anyone reimplement the same thing with almost zero friction. That ease is brilliant for hobbyists, but for maintainers it risks being demoralizing when their work is strip‑mined without recognition, support, or a return of knowledge and features to the original repo.
A bit of context: permissive licenses like MIT or Apache were written in a world where the consumer of your code was another human developer, not an automated crawler vacuuming up half of GitHub to feed a model. IMO that mismatch matters: maintainers carry the costs while others capture the upside at potentially industrial scale.
So here is an idea. What if platforms like GitHub added a simple toggle? By default, AI crawlers cannot simply slurp up your repo. Not saying that is easy to implement (it's bloody hard to recognize and block crawlers), but challenges are there to be met. If you are okay with it, you flip the switch and maybe even set a crawl fee once your project hits a certain popularity threshold. The platform handles the plumbing, and maintainers get to choose. Human devs still enjoy the freedoms of open source, while AI scrapers face a new rule: pay me or get the fork out. This would not make it impossible for GenAI to create copycats of your code, but at least you as a maintainer get to decide: either AI operators cannot train on your work to get better at replicating it (and everyone else's), or, if you do allow it, you damn well have the option to be properly compensated for it. A rough sketch of what such a policy gate could look like follows below.
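To make the idea concrete, here is a minimal sketch of that gate. Everything in it is hypothetical: GitHub has no such setting today, the `RepoPolicy` fields and fee logic are invented for illustration, and the sketch simply assumes the hard part (reliably detecting an AI crawler) has already happened somewhere upstream.

```python
# Hypothetical sketch only: no platform exposes these settings or APIs today.
# It illustrates a per-repo AI-crawler toggle with a fee above a popularity threshold.
from dataclasses import dataclass


@dataclass
class RepoPolicy:
    allow_ai_crawling: bool = False       # default: AI crawlers are blocked
    crawl_fee_usd: float = 0.0            # fee charged once the repo is "popular"
    popularity_threshold_stars: int = 1000


@dataclass
class CrawlDecision:
    allowed: bool
    fee_usd: float = 0.0
    reason: str = ""


def evaluate_crawl_request(policy: RepoPolicy, stars: int, is_ai_crawler: bool) -> CrawlDecision:
    """Decide whether a request may proceed, and at what price."""
    if not is_ai_crawler:
        # Human developers keep the usual open source freedoms.
        return CrawlDecision(allowed=True, reason="human access, no restrictions")
    if not policy.allow_ai_crawling:
        return CrawlDecision(allowed=False, reason="maintainer has not flipped the switch")
    if stars >= policy.popularity_threshold_stars:
        return CrawlDecision(allowed=True, fee_usd=policy.crawl_fee_usd,
                             reason="pay me or get the fork out")
    return CrawlDecision(allowed=True, reason="below popularity threshold, free for now")


# Example: a popular repo whose maintainer opted in at $500 per crawl.
policy = RepoPolicy(allow_ai_crawling=True, crawl_fee_usd=500.0)
print(evaluate_crawl_request(policy, stars=4200, is_ai_crawler=True))
```

The point of the sketch is the default: opt-out is the baseline, and the maintainer, not the crawler, decides when and at what price that changes.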
This is probably not something you can do under the default MIT or Apache license. Which is why we may need something like an MIT‑PMOGTFO License. Sure, this is a bit tongue‑in‑cheek, but nonetheless it's an interesting thought experiment on how open source could (should?) evolve when automation changes the stakes. Collaboration remains welcome; extraction might not be.
This is not an AI vs open source debate (I’m actually quite optimistic about AI myself, sometimes to the frustration of people who hear me talking about it over at the Monkey Patching Podcast), but it is definitely about how we ensure sustainable contribution to the commons. And if the commons is going to stay healthy, maybe it is time to start experimenting with switches, settings, and yes, a little PMOGTFO.