A contributor license agreement, or CLA, usually (but not always) includes an important clause: a copyright assignment.
This is a strategy employed by commercial companies with one purpose only: to place a rug under the project, so that they can pull at the first sign of a bad quarter. This strategy exists to subvert the open source social contract. These companies wish to enjoy the market appeal of open source and the free labor of their community to improve their product, but do not want to secure these contributors any rights over their work.
List of some companies that use CLAs: https://en.wikipedia.org/wiki/Contributor_License_Agreement#Users
First link gives an SSL warning for me. Here’s the Wayback Machine link https://web.archive.org/web/20240425192244/https://drewdevault.com/2023/07/04/Dont-sign-a-CLA-2.html
Maybe your browser doesn’t like Let’s Encrypt certificates.
No issues here whatsoever with other sites, including those that use L.E. certs. Weird. I guess it is a Tor circuit related issue. On another browser without Tor it loads fine.
I clicked it now and it’s fine. I think sometimes LE certs haven’t renewed yet in their time zone of origin, even though in the viewer’s time zone they have already expired. As far as I can tell, SSL certs don’t have time zone fields for their dates, which seems kinda weird.
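For what it’s worth, X.509 validity timestamps are actually encoded in GMT, so a correct client compares them against UTC rather than local time. A quick sketch with Python’s stdlib (the date string is made up, in the format `ssl.getpeercert()` returns):

```python
import ssl
from datetime import datetime, timezone

# Example "notAfter" string in the format ssl.getpeercert() returns;
# X.509 validity times are always expressed in GMT, so there is no
# per-time-zone field that could be out of sync.
not_after = "Jun  1 12:00:00 2025 GMT"

# Parse it into UTC seconds since the epoch...
expiry_ts = ssl.cert_time_to_seconds(not_after)
expiry = datetime.fromtimestamp(expiry_ts, tz=timezone.utc)

# ...and compare against the current UTC time; the viewer's local
# time zone never enters into the expiry check.
print("expires:", expiry.isoformat())
print("expired:", datetime.now(timezone.utc) > expiry)
```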
Right. Thanks for sharing that thought. The weird thing was that the warning was about the domain name not being listed, except for two other domains, including one wildcard. Maybe it has to do with the fact that Drew had to deal with severe DDoS attacks some time ago, and had to move and rebuild services, which may have included their blog. I had this with Tor Browser, and then tested with LibreWolf configured for Tor and got the same error (I didn’t check the details of the two Tor circuits). I made a screenshot of the warning.
The fundamental problem I see here is the cloud. We were supposed to have easy self hosted applications and data on cheap always-online hardware. Instead, companies promoted cloud services with the intention of rent seeking through subscriptions.
If you look at the software that went from open source to source available, you’ll notice that almost all of them are cloud applications. Why? The companies that created them were hoping to make money through the same subscription model. But then, big cloud players like AWS just outcompeted them using their own software.
Would this have happened if the FOSS ecosystem neglected the cloud hype and gravitated towards self hosting? Perhaps. But not as badly. We still haven’t seen enough progress towards self hosting. It’s still very hard for regular folks. Genuine efforts like sandstorm didn’t find enough momentum. I hope this changes at least now.
I see more and more businesses realizing the trade-off they’re actually making when they give in and buy cloud services instead of hosting their own hardware and hiring their own team.
Yes, the up-front overhead cost is much larger, but there have been a lot of fuckups at these big companies recently, and a lot more valid reasons for companies to justify bringing everything back in-house instead of living in the cloud.
Part of that is the growth of open source cloud applications, so you can still have it all “in the cloud” but that cloud is still owned, operated, and administered by your own team, in a physical location you can access. A lot of the same benefits just with more technical overhead.
The “cloud” was never supposed to be a service for businesses. It was what was offered to consumers, so they could access the kinds of things businesses have, like internet-hosted storage. But the marketing worked too well on middle managers.
How do you suggest self-hosting should be made easier while at the same time making offering the same software as a service harder?
Basically the bits that are hard for people about self-hosting is that you need to learn about concepts you have no previous experience with. This can only be mitigated by abstracting those concepts away to some degree and automating more but those are exactly the same steps that make offering the software as a service easier too.
I have to disagree with both those assertions.
If software is easy to self-host, then there is no need to make it harder to deploy as SaaS. The latter will be irrelevant for most people.
And the problem of self-hosting isn’t a circular problem as you project it to be. There are architectural changes that can make it positively easier to self-host without exposing the sysadmin to needless complexity. The example I quoted before - Sandstorm - was a step in this direction. Deploying and administering applications on Sandstorm would have been as easy as deploying one on desktop (including cross-app integrations). The change needed was to modify the app to work with the Sandstorm platform. Unfortunately, the platform didn’t gain the momentum needed to ensure that all available apps would be ported. But it shows that the concept is viable.
Self-hosting has some inherent downsides that will never make it so easy that it completely outclasses the competition of cheap mass-hosters hosting the exact same thing for most people. You or I might enjoy fiddling with technical things, keeping our computers running 24/7, or setting up a workaround for CGNAT or Dual-Stack Lite so that the computer is even reachable from the outside (or even having a physically stationary computer in the first place; lots of people carry laptops around instead, or don’t have anything but a phone). But most people would much rather pay someone US$1 a month to do all that stuff for them.
My sincere belief is that the difficulty in self hosting is due to the lack of priority, investment and development, due to the perverse incentives of the SaaS model. I don’t think it’s a technical problem that cannot be resolved with sufficient work. There are PoCs that prove that it can be made as simple as desktops and mobile phones.
I’m surprised that other people are surprised that for-profit companies constantly try to increase their profits; such companies only contribute to FOSS when that’s more profitable than the alternative. The Linux kernel, AMDGPU, Steam, etc only exist because some part of the software/hardware stack is proprietary (which becomes a more attractive product as the FOSS portion of the stack improves).
I’m definitely not justifying the “rug-pulling”, but people need to stop supporting projects with no potential for long-term profitability unless those projects can survive without any support from for-profit companies. Anything else is destined to fail.
What part of the Linux kernel is proprietary? genuinely curious.
While Linux itself isn’t proprietary, it supports loading proprietary firmware/microcode blobs and running on proprietary hardware. Thus, part of the Linux hardware/software stack is proprietary.
I believe they mean that vendors support the FOSS since it’s economically advantageous for them to do so (usually because implementing an alternative is not economically viable). The proprietary part finances their contributions to FOSS, which is usually the platform that they run on top of.
There is a more detailed explanation of the economics of FOSS here, for instance: https://hachyderm.io/@anthrocypher/112315622785685958, but as I understand it, it’s a common good that companies try to build on top of (and in some/most cases supplant with their own proprietary versions).
But yeah, I’d love to see the OP’s thoughts on this.
I just replied to the other person’s comment.
people need to stop supporting projects with no potential for long-term profitability unless those projects can survive without any support from for-profit companies.
You see the contradiction here right?
I don’t. Could you elaborate?
Open source projects have no potential for long-term profitability unless those projects get support from for-profit companies, thus compromising the nature of open source.
Not all FOSS projects need to be profitable to survive. IOW if a project cannot survive without being profitable and it cannot be profitable long-term, then it cannot survive long-term.
Corporate
Open source
Pick one. Capitalism cannot abide anything not being commodified. “Corporate open source” is an inherently contradictory term.
I’m not here to defend capitalism, only to say that capitalism and open source have had a more complicated relationship than that.
The Apache HTTP Server was the preeminent dot-com era open source project. It’s hard to imagine the dot-com boom without it. People seem to forget that it was corporate open source. It was “a patchy server” developed (from NCSA HTTPd) and maintained largely by internet startups like Organic, Inc. Many other critical components of the dot-com tech stack were similarly developed.
The project is jointly managed by a group of volunteers located around the world, using the Internet and the Web to communicate, plan, and develop the server and its related documentation.
This is what I mean though. Most groundbreaking development is done voluntarily or with public funds. It is antithetical to capitalism.
Capital comes in AFTER it is proven useful and/or profitable.
Open Source has been historically tied to corporations. It kicked off with Netscape opening their browser. Eric S. Raymond was a major player behind the term, and he’s explicitly right-Libertarian.
Free Software, OTOH, is a different matter. I think the two are overdue for a divorce.
I happen to like the term FOSS and would like to keep it around. It’s catchy.
Definitely time to kick out the corporatists though.
No, open source has been historically exploited by corporations.
By design, yes.
Great video. I actually bought the domain opensource.rip a few weeks ago, just to list the affected projects and explain exactly what Jeff Geerling did here. Haven’t started it yet, and I’m mostly commenting just to make myself commit to the idea.
Intending to create a static site with Zola, lmk if you wanna contribute. Submitting information like I asked for in the following post would already help me out :)
Solid domain, hope this works out. Even better if you have a list of maintained forks as alternatives.
Sentry also did this by embracing the Business Source License. Technically, you can still get an MIT-licensed version, but it has to be more than two years old.
As a former employee that worked there during the days that Sentry really promoted itself being Open Source, it was disappointing to see. VC Funding and a growth obsession basically poisoned the well.
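For anyone unfamiliar, the BSL’s delayed relicensing is driven by parameters in the license header itself; a made-up example of the fields a BUSL 1.1 project declares (licensor, work, and date here are all hypothetical):

```
Business Source License 1.1

Licensor:             Example Corp
Licensed Work:        ExampleDB 2.4
Additional Use Grant: None
Change Date:          2027-06-01
Change License:       MIT
```

On the Change Date, that specific version automatically becomes available under the Change License, which is why only releases more than a couple of years old end up under the permissive license.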
I think this article does a better job at explaining the issue: https://opensource.net/why-single-vendor-is-the-new-proprietary/
CLA are a symptom of the overall problem and many corporate run projects are affected even if they don’t have a CLA (but are selective in what they merge).
A major caveat I’ve noticed some people misunderstand: it’s corporate CLAs that are problematic. The Apache Foundation also requires contributors sign a CLA, but it’s to provide a legal fail safe and a way to update to say Apache 3.0 if need be one day. Apache’s non profit, open source mission aligns with respecting the rights of contributors and the community. Corporations, on the other hand, not so much.
The way I see it is that we don’t know the content of Apache 3.0, nor do we get a vote on which license they adopt in the end. Does Apache have a good track record? Yes, but it is getting difficult to put trust in something today. It’s still a rug under the project, or a fail safe as you name it, which is what corporates use today. I would rather have a framework/procedure in place preventing it from happening from the get-go.
ADDITION: I haven’t read Apache’s CLA yet, so it might or might not contain a copyright grant clause.
Corporate open source isn’t “dead.” This is fear-mongering nonsense based on recency bias.
I just found out that some infrastructure software also uses CLAs, including:
- Kubernetes (hosted by CNCF)
- Istio (hosted by CNCF)
- Grafana
- All projects under Apache Software Foundation (e.g. HTTP server)
- OpenStack (hosted by OpenInfra)
To my surprise, even Golang core uses a CLA.
EDIT: Add more to the list
EDIT 2: Envoy Proxy also hosted by CNCF uses DCO instead of CLA. Interesting.
It looks very difficult to build an infrastructure stack without some components that use CLAs.
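For what it’s worth, the DCO mentioned above isn’t a signed document at all; it’s asserted per commit with a Signed-off-by trailer (the commit message here is just an example):

```shell
# The -s/--signoff flag appends a Signed-off-by trailer taken from
# your git user.name / user.email config:
git commit -s -m "fix: handle empty config"

# The commit message then ends with a line like:
#   Signed-off-by: Jane Doe <jane@example.com>
# which is what DCO bots on projects like Envoy check for.
```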
Aren’t K8s and Go fundamentally Google projects?
Golang, I believe, yes, but isn’t K8s now under the CNCF, with multiple companies behind it?
Hmm yep, you’re right. Wasn’t aware of this, funny.
All CLAs aren’t created equal, IMHO. I ain’t a lawyer, but it looks to me like K8s’s grants the CNCF a license to use your code along with a patent license, but you remain the copyright owner. As far as these things go, this one doesn’t look that terrible, at first glance. Or, at least, I’ve seen worse.
From that company you love to hate:
Apple Releases Open Source AI Models That Run On-Device
Believe it or not, Apple has been, historically, one of the largest and consistent contributors to a number of open source projects for decades. Yes, Apple does a lot of… problematic stuff and deserves a lot of the criticism it receives, but, to their credit, they do support the FOSS community in many ways, and always have, especially by open-sourcing many of their own technologies, albeit often quietly.
There’s WebKit, HTML, OpenGL (to which they were the primary contributors for almost a decade in the mid 00s-10s). They’re also pretty much the only ones who decide new emojis, and always have been, since the other 5 or 6 Unicode emoji board members don’t care to contribute.
Edit:
a list of current FOSS projects at Apple
Apple at GitHub:
More repos of older projects:
I had a quick peek into Swift and FoundationDB, and neither has a CLA or DCO - an interesting move by Apple, which usually makes anti-consumer decisions.