_______ __ _______
| | |.---.-..----.| |--..-----..----. | | |.-----..--.--.--..-----.
| || _ || __|| < | -__|| _| | || -__|| | | ||__ --|
|___|___||___._||____||__|__||_____||__| |__|____||_____||________||_____|
on Gopher (unofficial)
HTML Visit Hacker News on the Web
COMMENT PAGE FOR:
HTML Why Nextcloud feels slow to use
kotaKat wrote 4 hours 47 min ago:
Nextcloud is the most confusing thing I've tried to figure out how to
install as a non-Linux user (i.e. Windows admin) being told to "just
install Docker then do this".
lousken wrote 7 hours 21 min ago:
It's close to sharepoint and sharepoint takes around 15MB to download.
If you want to run something like sharepoint suite locally, this is the
best option.
Question is - do you want/need to run sharepoint locally?
I hate javascript and I have it off by default. With that being said,
this is a huge app with tons of options.
Other apps without compression are just as bad
Draw io - 25MB
Outlook - 30MB
Gmail - 30MB
The difference is, this is oss, anyone can contribute and fix it
ThouYS wrote 8 hours 15 min ago:
It simply boils down to many software developers not knowing what
they're doing
poisonborz wrote 18 hours 4 min ago:
I agree with the criticism, but wonder why there are no alternatives?
Nextcloud, for what most people use it, is a rather simple,
straightforward collection of apps, yet not even those single apps have
alternatives. E.g. show me a good self-hostable web calendar; it doesn't
exist.
Why does Nextcloud, or even just parts of it, not have dozens of
alternatives?
dugite-code wrote 18 hours 41 min ago:
In my experience the bottleneck for any Nextcloud install is typically
the database.
Unlike many other projects it's surprisingly easy to get in a situation
where the db is throttling due to IO issues on a single box machine.
Having the db on a separate drive from the storage and logging
really speeds things up.
That and setting up a lot of the background tasks like image preview
generation, Redis, etc. properly.
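For reference, the usual knobs here are system cron for background jobs, Redis for file locking, and pre-generated previews. A minimal sketch of that setup, assuming a stock install at /var/www/nextcloud running as www-data (paths and user are assumptions, and preview:generate-all requires the third-party Preview Generator app):

```shell
# Run background jobs from system cron instead of AJAX; crontab entry:
#   */5 * * * * php -f /var/www/nextcloud/cron.php
sudo -u www-data php /var/www/nextcloud/occ background:cron

# Use Redis for file locking so the database isn't hit on every request
sudo -u www-data php /var/www/nextcloud/occ config:system:set \
  memcache.locking --value '\OC\Memcache\Redis'

# Pre-generate image previews (needs the Preview Generator app installed)
sudo -u www-data php /var/www/nextcloud/occ preview:generate-all
```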
janikvonrotz wrote 18 hours 51 min ago:
Most people here seem to have experienced a Nextcloud version from 3
years ago.
In version 31 the frontend has been rewritten in Vue and with Nextcloud
Office aka Collabora Online you get much more than a shitty GDocs.
Of course some apps like the calendar have not been rewritten.
Most readers do not understand what it takes to rewrite the frontend
for an entire ecosystem.
7e wrote 21 hours 3 min ago:
This is classic open source sucking because it's built by amateurs
and not professionals. Bush league stuff.
FrostKiwi wrote 22 hours 43 min ago:
Running NixOS-based Nextcloud for everything. Multiple family members,
including me, get their photos and videos auto-uploaded and handled by
Nextcloud's Memories [1].
Awesome experience, but you really have to stick to the happy path.
Even with a super powerful CPU, video playback was unusable until a
dedicated AMD GPU handled the transcodes. And even though it's fiber to
the home, sometimes upload speeds collapse for no apparent reason and
it's unusable.
All in all impressive such a massive FOSS project runs at all.
HTML [1]: https://memories.gallery/
Vaslo wrote 23 hours 22 min ago:
My Nextcloud constantly needs updating. One day I started getting some
odd random error about the database. The advice I got was to do all this
convoluted stuff to maybe get it working again. I uninstalled, and
haven't gone back. It's just not in the same league as paid
alternatives.
sha16 wrote 1 day ago:
It's also slow for bulk operations like "mv somefolder/ ..." - it
processes each file one at a time rather than a batch operation (I
tried it out recently and this was one thing that stuck out).
realaaa wrote 1 day ago:
great post thank you !
does anyone have some tips & tricks on how to optimise Nextcloud
installation for better performance, perhaps some server-side tweaks
can improve things a bit also?
I have one running in a small VM (4 GB ram) and it's OK for what it is,
but yeah that initial loading delay is very noticeable ..
xandrius wrote 1 day ago:
I know people here don't like it when one answers to complaints about
OSS projects with "go fix it then" but seeing the comment section here,
it's hard to not at least think it.
About 50-100 people saying that they know exactly why NC is slow,
bloated, bad, but fail to a) point out a valid alternative, or b) act
and do something about it.
I'm going to say that I love NC despite its slow performance. I own my
storage, I can do Google Drive stuff without selling my soul (aka data)
to the devil and I can go patch up stuff, since the code is open.
Is downloading lots of JS and waiting a few seconds bad? Yes. But did I
pay for any of it? No. Am I the product as a result of choosing NC?
Also no.
Having a basic file system with a dropbox alternative and being able to
go and "shop" for extensions and extra tools feels so COOL and fun. Do
I want to own my password manager? Bam, covered. Do I want to
centralise calendar, mail and kanban into one? Bam, covered.
Codebase is AGPL, installs easily and you don't need to do surgery
every new update.
I've been running it without hiccups for over 6 years now.
Would I love it to be as fast and smooth as a platform developed by an
evil tech behemoth which wants to swallow everyone's data? Of course.
Am I happy NC exists? Yes!
And if you got this far, dear reader, give it a try. It's free and you
can delete it in a second but if you find something to improve and know
how, go help, it helps us all :)
nxobject wrote 13 hours 56 min ago:
I will admit: if NextCloud is competing against GSuite, funded by
orgs and power users with paying subscriptions, how are we going to
expect NextCloud to have anything resembling feature parity while
maintaining "harder now but better long term" engineering?
aeldidi wrote 19 hours 34 min ago:
Yep, this sums it up perfectly for me. I tend to stay away from the
extra stuff since the quality is hit or miss (more often hit than
miss to be fair), but really there's something special about having
something like it available. I think as a freely available package
Nextcloud is immensely valuable to me. I never say anything bad about
it without mentioning that in the same breath nowadays.
spencerflem wrote 1 day ago:
I think there's something cool possible in running the NextCloud
plugin api over Sandstorm's auth and sandboxing
aeldidi wrote 1 day ago:
Nextcloud is something I have a somewhat love-hate relationship with.
On one hand, I've used Nextcloud for ~7 years to backup and provide
access to all of my family's photos. We can look at our family pictures
and memories from any computer, and it's all private and runs mostly
without any headaches.
On the other hand, Nextcloud is so far from being something like Google
Docs, and I would never recommend it as a general replacement to
someone who can't tolerate "jank", for lack of a better word. There are
so many small papercuts you'll notice when using it as a power user.
Right off the top of my head, uploading large files is finicky, and no
amount of web server config tinkering gets it to always work; thumbnail
loading is always spotty, and it's significantly slower than it needs
to be (I'm talking orders of magnitude).
With all that said, I'm so grateful for Nextcloud since I don't have a
replacement, and I would prefer not having all our baby and vacation
pictures feeding some big corporation's AI. We really ought to have a
safe, private place to store files in 2025 that the average person can
wrap their head around. I only wish my family took better advantage of
it, since I'm essentially providing them with unlimited storage.
realaaa wrote 1 day ago:
is Immich that thing? I've played with it, but didn't really dig
deeper
they claim they can do it all when it comes to pictures and videos
etc
aeldidi wrote 19 hours 44 min ago:
I use my Nextcloud as a general file storage thing, I just
emphasized the photo aspect because that's my family's main use
case.
I have heard of Immich though, perhaps I owe it an honest try
someday.
nyadesu wrote 1 day ago:
Immich is actually usable, thumbnail previews work without any
previous setup, and the mobile app is pretty responsive
Unlike Nextcloud, I feel I can rely on it and feel I can upgrade
without issues
aeldidi wrote 19 hours 40 min ago:
That sounds really promising, maybe my family would be better
suited to something like that.
I will say though, Nextcloud is almost painless when it comes to
management. I've had one or two issues in the past, but their
"all in one" docker setup is pretty solid, I think. It's
what I've been using for the last year or so.
jacooper wrote 1 day ago:
Immich is way better if all you need is photo storage.
It's Google photos level.
nairboon wrote 1 day ago:
Microsoft Teams goes hold my beer and downloads more than 75 MB of
Javascript.
skeptrune wrote 1 day ago:
I know that this is supposed to be targeted at NextCloud in particular,
but I think it's a good standalone "you should care about how much
JavaScript you ship" post as well.
What frustrates me about modern web development is that everyone is
focused on making it work much more than on making sure it works
fast. Then when you go to push back, the response is always
something like "we need to not spend time over-optimizing."
Sent this straight to the team slack haha.
lurker_jMckQT99 wrote 1 day ago:
(tangential) Reading the comments, several mentioned "copyparty"; never
heard of it before, haven't used it, haven't reviewed it, but their
"feature showcase" video makes me want to give it a shot [1] :)
HTML [1]: https://www.youtube.com/watch?v=15_-hgsX2V0
gloosx wrote 1 day ago:
I was expecting the author to open the profiler tab instead of just
staring at the network tab. But it's yet another "heavy JavaScript bad" rant.
You really consider 1 MB of JS too heavy for an application with
hundreds of features? How exactly are developers supposed to fit an
entire web app into that? Why does this minimalism suddenly apply only
to JavaScript? Should every desktop app be under 1 MB too? Is Windows
Calculator 30 MB binary also an offense to your principles?
What year is it, 2002? Even low-band 5G gives you 30-250 Mbps down.
At those speeds, 20 MB of JS downloads in well under a second. So what's
the math behind the 5-10 second figure? What about the cache? Is it
turned off for you and you redownload the whole nextcloud from scratch
every time?
Nextcloud is undeniably slow, but the real reasons show up in the
profiler, not the network tab.
big-and-small wrote 1 day ago:
Such an underrated comment. You can really have 500MB of dependencies
for your app because you're on macOS and it's still going to be fast,
because memory use has nothing to do with performance.
Pretty much the same with JavaScript - modern engines are amazingly
fast, or at least they really don't depend on the amount of raw
JavaScript fed to them.
celsoazevedo wrote 1 day ago:
> Even low-band 5G gives you 30-250 Mbps down.
On paper. In practice, it can be worse than that.
I've spent the past year using a network called O2 here in the UK.
Their 5G SA coverage depends a lot on low band (n28/700MHz) and had
issues in places where you'd expect it to work well (London, for
example). I've experienced sub 1Mbps speeds and even data failing
outdoors more than once. I have a good phone, I'm in a city, and
using what until a recent merger was the largest network in the
country.
I know it's not like this everywhere or all the time, but for those
working on sites, apps, etc, please don't assume good speeds are
available.
gloosx wrote 1 day ago:
That's really quite odd. There isn't even 5G in my area, yet I get
100 Mbps stable download speed on 4G LTE, outdoors and indoors, any
time of the day. Is 5G a downgrade? Is it considered normal service
in the UK when the latest generation of cellular network provides a
connection speed comparable to 3G, launched in 2001? How is this even
acceptable in the year 2025? Would anyone in the UK start
complaining if they downgraded it to 100Kbps? Or should we design
the apps for that case?
celsoazevedo wrote 1 day ago:
5G is better, but like any G, networks need to deploy capacity
for it to be fast.
I sometimes see +1Gbps with 100MHz of n78 (3500MHz), a frequency
that wasn't used for any of the previous Gs, but as you are
aware, 5G can also be deployed on low band and while more
efficient, it can't do miracles. For example, networks here use
700MHz. A 10MHz slice of 700MHz seems to provide around 75Mbps on
4G and around 80Mbps on 5G under good conditions. It's better,
but not a huge improvement.
The problem in my case is a lack of capacity. Not all sites have
been upgraded to have faster backhaul or to broadcast the higher,
faster frequencies they use for 5G, so I may end up using low
band from a site further away... Low frequencies = less capacity
to carry data. Have too many users using something with limited
capacity and sometimes it will be too slow or not work at all.
It's usually the network's fault as they're not
upgrading/expanding/investing enough or fast enough... sometimes
it's the local authority being difficult and blocking
upgrades/new sites (and we also have the "5G = deadly waves"
crowd here).
It shouldn't happen, but it does happen[0], and that's why we
shouldn't assume that a user - even in a developed country - will
have signal or good speeds everywhere. Every network has weak
spots, coverage inside buildings depends a lot on the materials
used, large events can cause networks to slow down, etc. Other
than trying to pick a better network, there's not much a user can
do.
The less data we use to do something, the better it is for users.
---
[0] Here's a 2022 article from BBC's technology editor
complaining about her speeds:
HTML [1]: https://www.bbc.co.uk/news/technology-63798292
j1elo wrote 1 day ago:
> low-band 5G gives you 30-250
First and foremost, I agree with the meat of your comment.
But I wanted to point out, regarding your comment, that it DOES very
much matter that apps meant to be transmitted over a remote connection
are, indeed, as slim as possible.
You must be thinking about 5G on a city with good infrastructure,
right?
I'm right now having a coffee on a road trip, with a 4G connection,
and just loading this HN page took like 8~10 seconds. Imagine a bulky
and bloated web app if I needed to quickly check a copy of my ID
stored in NextCloud.
It's time we normalize testing network-bounded apps through
low-bandwidth, high-latency network simulators.
znpy wrote 1 day ago:
> You really consider 1 MB of JS too heavy for an application with
hundreds of features? How exactly are developers supposed to fit an
entire web app into that? Why does this minimalism suddenly apply
only to JavaScript? Should every desktop app be under 1 MB too? Is
Windows Calculator 30 MB binary also an offense to your principles?
Yes, I don't know, because it runs in the browser, yes, yes.
elAhmo wrote 1 day ago:
One thing that could help with this is to use CDN for these static
assets, while still having the Nextcloud hosted on your own.
We had a similar situation with some notebooks running in production,
which were quite slow to load because it was loading a lot of JS files
/ WASM for the purposes of showing the UI. This was not part of our
core logic, and using a CDN to load these, but still relying on private
prod instance for business logic helped significantly.
I have a feeling this would be helpful here as well.
atoav wrote 1 day ago:
As someone who has hosted a few Nextcloud instances for a few years:
Nextcloud can be quick if you make it work. If you want to get a good
feel for how it can be, rent a Hetzner storage box (1TB for below 5
Euros a month).
You sadly can't just install nextcloud on your vanilla server and
expect it to perform well.
maples37 wrote 1 day ago:
Do you have any tips and tricks to share? I'm running a self-hosted
instance on an old desktop PC in my basement for me and a couple
family members. Performance is kinda meh, and I don't think it's due
to resource constraints on the server itself. This is after
following the performance recommendations in the admin console to
tweak php.ini settings.
atoav wrote 19 hours 13 min ago:
This was a few years ago so I can't say I exactly remember, but
PHP performance is certainly one of the good routes to
investigate.
jimangel2001 wrote 1 day ago:
Nextcloud is a mess. It tries to do everything. The only reason I keep
it in production is because it's a hassle to transition my files and
DAVx info elsewhere.
The http upload is miserable, it's slow, it fails with no message, it
fails to start, it hangs. When uploading duplicate files the popup is
confusing. The UI is slow, the addons break on every update. The
gallery is very bad, now we use immich.
dengolius wrote 1 day ago:
Maybe it's because of using PHP?
rafark wrote 1 day ago:
Nope. Php is sufficiently fast.
macinjosh wrote 1 day ago:
Javascript making PHP look bad.
estimator7292 wrote 1 day ago:
Like most of us I think, I really, really wanted to like nextcloud. I
put it on an admittedly somewhat slow dual Xeon server, gave it all 32
threads and many, many gigabytes of ram.
Even on a modern browser on a brand new leading-edge computer, it was
completely unusably slow.
Horrendous optimization aside, NC is also chasing the current fad of
stripping out useful features and replacing them with oceans of
padding. The stock photos app doesn't even have the ability to sort by
date! That's been table stakes for a photo viewer since the 20th
goddamn century.
When Windows Explorer offers a more performant and featureful
experience, you've fucked up real bad.
I would feel incredibly bad and ashamed to publish software in the
condition that NextCloud is in. It is IMO completely unacceptable.
jw_cook wrote 1 day ago:
The article mentions Vikunja as an alternative to Nextcloud Tasks, and
I can give it a solid recommendation as well. I wanted a self-hosted
task management app with some lightweight features for organizing tasks
into projects, ideally with a kanban view, but without a full-blown PM
feature set. I tried just about every task management app out there,
and Vikunja was the only one that ticked all the boxes for me.
Some specific things I like about it:
* Basic todo app features are compatible with CalDAV clients like
tasks.org
* Several ways of organizing tasks: subtasks, tags, projects,
subprojects, and custom filters
* List, table, and kanban views
* A reasonably clean and performant frontend that isn't cluttered
with stuff I don't need (i.e., not Jira)
And some other things that weren't hard requirements, but have been
useful for me:
* A REST API, which I use to export task summaries and comments to
markdown files (to make them searchable along with my other plaintext
notes)
* A 3rd party CLI tool: https://gitlab.com/ce72/vja
* OIDC integration (currently using it with Keycloak)
* Easily deployable with docker compose
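As a sketch of what that REST API access looks like (host, token, and the task id are placeholders, and the endpoint paths should be double-checked against the Vikunja API docs for your version; requires a live instance, so not runnable as-is):

```shell
VIKUNJA=https://vikunja.example.com/api/v1   # placeholder host
AUTH="Authorization: Bearer $VIKUNJA_TOKEN"  # API token from the settings UI

curl -s -H "$AUTH" "$VIKUNJA/projects"          # list projects
curl -s -H "$AUTH" "$VIKUNJA/tasks/all"         # all tasks across projects
curl -s -H "$AUTH" "$VIKUNJA/tasks/42/comments" # comments for one task
```

Piping each response through jq is enough to flatten tasks and comments into plaintext for grep/ripgrep.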
mxuribe wrote 1 day ago:
I know this post is more about nextcloud...but can i just say this
one feature from Vikunja "...export task summaries and comments..."
sounds great!!! One of the features i seek out when i look for
task/project management software is the ability to easily and
comprehensively provide nice exports, and that said exports
*include comments*!!
Either apps lack such an export, or it's very minimal, or it includes
lots of things, except comments... Sometimes an app might have a REST
api, and I'd need to build something non-trivial to start pulling out
the comments, etc. I feel like it's silly in this day and age.
My desire for comments to be included in exports is for local
search...but also because i use comments for sort of thinking aloud,
sort of like an inline task journaling...and when comments are
lacking, it sucks!
In fact, when i hear folks suggest to simply stop using such apps and
merely embrace the text file todo approach, they cite their having
full access to comments as a feature...and, i can't dispute their
claim! But barely any non-text-based apps highlight the inclusion of
comments. So, i have to ask: is it just me (who doesn't use a
text-based todo workflow), and then all the other folks who *do use* a
text-based todo flow, who actually care about access to comments!?!
jw_cook wrote 1 day ago:
Yeah, I hear you. I almost started using a purely text-based todo
workflow for those same reasons, but it was hard to give up some
web UI features, like easily switching between list and
kanban-style views.
My use case looks roughly like this: for a given project (as in
hobby/DIY/learning, not professional work), I typically have
general planning/reference notes in a markdown file synced across
my devices via Nextcloud. Separately, for some individual tasks I
might have comments about the initial problem, stuff I researched
along the way, and the solution I ended up with. Or just thinking
out loud, like you mentioned. Sometimes I'll take the effort to
edit that info into my main project doc, but for the way I think,
it's sometimes more convenient for me to have that kind of info
associated with a specific task. When referring to it later,
though, it's really handy to be able to use ripgrep (or other
search tools) to search everything at once.
To clarify, though, Vikunja doesn't have a built-in feature that
exports all task info including comments, just a REST API. It did
take a little work to pull all that info together using multiple
endpoints (in this case: projects, tasks, views, comments, labels).
Here's a small tool I made for that, although it's fairly specific
to my own workflow:
HTML [1]: https://github.com/JWCook/scripts/tree/main/vikunja-export
mxuribe wrote 1 day ago:
> Yeah, I hear you. I almost started using a purely text-based
todo workflow for those same reasons, but it was hard to give up
some web UI features, like easily switching between list and
kanban-style views.
Yeah, i like me some kanban! Which is one reason i've resisted
the text-based workflow...so far. ;-)
> ...Vikunja doesn't have a built-in feature that exports all
task info including comments, just a REST API. It did take a
little work...
Aww, man, then i guess i misread. I thought it was sort of easier
than that. Well, i guess that's not all bad. It's possible, but
simply requires a little elbow grease. I used to use Trello, which
does include comments in their JSON export, but i had my own
little python app to copy out and filter only the key things i
wanted - like comments - and reformatted to other text formats
like CSV, etc. But Trello is not open source, so it's not an
option for me anymore. Well, thanks for sharing (and for making!)
your vikunja export tool! :-)
PunchyHamster wrote 1 day ago:
It is slow, and the code seems to be messy enough to be fragile. It's
also in PHP, which doesn't help performance.
rpgbr wrote 1 day ago:
I wonder how bewCloud[1] stacks up against NextCloud, since it's
meant to be a "modern and simpler alternative" to it. Has anyone
tested it?
HTML [1]: https://bewcloud.com/
ndom91 wrote 1 day ago:
Many have brought up using websockets instead of REST API calls. It
looks like they're already working in that direction; scroll down to
"Developer tools and APIs":
HTML [1]: https://nextcloud.com/blog/nextcloud-hub25-autumn/
ndom91 wrote 1 day ago:
This post completely misses the point. Linear downloads ~6.1mb of JS
over the network, decompressed to ~31mb and still feels snappy.
Applications like linear and nextcloud aren't designed to be opened and
closed constantly. You open them once and then work in that tab for the
remainder of your session.
As others have pointed out in this thread, "feeling slow" is mostly due
to the number of fetch requests and the backend serving those requests.
exabrial wrote 1 day ago:
>For context, I consider 1 MB of Javascript to be on the heavy side for
a web page/app.
I feel like > 2kb of Javascript is heavy. Literally not needed.
dmit wrote 1 day ago:
Preact has been fairly faithful to being <10k (compressed)! (even
though they haven't updated the original <3k claim since forever)
tracker1 wrote 1 day ago:
While I tend to agree... I've been on enough relatively modern web
apps that can hit 8mb pretty easily, usually because bundling and
tree shaking are broken. You can save a lot by being judicious.
IMO, the worst offenders are when you bring in charting/graphing
libraries into things when either you don't really need them, or
otherwise not lazy loading where/when needed. If you're using
something like React, then a little reading on SVG can do wonders
without bloating an application. I've ripped multi-mb graphing
libraries out to replace them with a couple components dynamically
generating SVG for simple charting or overlays.
zeppelin101 wrote 1 day ago:
The major shortcoming of NextCloud, in my opinion, is that it's
not able to do sync over LAN. Imagine wanting to synchronize 1TB+ of
data and not being able to do so over a 1 Gbps+ local connection, when
another local device has all the necessary data. There is some
workaround involving "split DNS", but I haven't gotten around to it.
Other than that, I thought NC was absolutely fantastic.
tfvlrue wrote 23 hours 48 min ago:
> it's not able to do sync over LAN
I'm curious what you mean by this. I've never had trouble syncing
files with the Nextcloud client, inside or outside of my LAN. I
didn't do anything special to make it work internally. It's
definitely not the fastest thing ever, but it works pretty seamlessly
in my experience.
DrammBA wrote 1 day ago:
> The major shortcoming of NextCloud, in my opinion, is that
it's not able to do sync over LAN.
That's an interesting way to describe a lack of configuration on
your part.
Imagine me saying: "The major shortcoming of Google Drive, in my
opinion, is that it's not able to sync files from my phone.
There is some workaround involving an app called 'Google Drive' that
I have to install on my phone, but I haven't gotten around to it.
Other than that, Google Drive is absolutely fantastic."
zeppelin101 wrote 1 day ago:
I don't know why the sarcasm is so necessary. I very much enjoyed
Nextcloud and I proudly ran it for the better part of a year. I
even ran various NC-ecosystem apps, such as the Office ones.
However, my objective was to try it out from the standpoint of
regular self-hosting. I wanted to contrast the 'out-of-the-box'
experience to Dropbox, which I had been using for many years up to
that point. Yes, one was centrally hosted, while the other was
self-hosted, but still, that was the experiment I was running. So
I'm sorry if I didn't live up to your standards of what a user
should be doing to their software, but I sure had lots of fun
self-hosting tons of software at that time.
DrammBA wrote 1 day ago:
Not sure why you took it so personally. I was simply pointing out
that if you don't configure a feature, then that feature would
obviously not work. For example, phone sync for Google Drive won't
work if you don't download the Google Drive app, and LAN access
for Nextcloud won't work if you don't set up LAN access.
immibis wrote 16 hours 52 min ago:
Except your phone comes with Google Drive and syncs things you
don't want it to, so Google can scan your life better.
DrammBA wrote 11 hours 17 min ago:
Last time I checked my iPhone didn't come with Google drive
Jaxan wrote 1 day ago:
I use it on LAN without a problem (using mDNS). Sure it runs with
self signed certificates, but that's ok with me.
redrblackr wrote 1 day ago:
Or just use ipv6!
You could also upload directly to the filesystem and then run occ
files:scan, or if the storage is mounted as external it just works.
Another method is to set your machines /etc/hosts (or equivalent) to
the local IP of the instance (if the device is only on lan you can
keep it, otherwise remove it after the large transfer).
Now your router should not send traffic to itself out to the internet;
it just loops it internally, so it never has to go over your ISP's
connection - meaning running over LAN only helps if your switch is
faster than your router.
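A rough sketch of the direct-to-filesystem route described above (the data directory, username, hostname, and IP are assumptions for illustration):

```shell
# Copy straight into the user's files directory on the server...
rsync -a /mnt/usb/photos/ /srv/nextcloud/data/alice/files/Photos/

# ...then tell Nextcloud to index what changed
sudo -u www-data php /var/www/nextcloud/occ files:scan alice

# Or pin the instance's hostname to its LAN address for a big transfer
# (remove the entry afterwards if the device also roams outside the LAN)
echo '192.168.1.10 cloud.example.com' | sudo tee -a /etc/hosts
```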
zeppelin101 wrote 1 day ago:
Good to know!
jw_cook wrote 1 day ago:
Check if your router has an option to add custom DNS entries. If
you're using OpenWRT, for example, it's already running dnsmasq,
which can do split DNS relatively easily: [1] If not, and you don't
want to set up dnsmasq just for Nextcloud over LAN, then DNS-based
adblock software like AdGuard Home would be a good option (as in, it
would give you more benefit for the amount of time/effort required).
With AdGuard, you just add a line under Filters -> DNS rewrites.
PiHole can do this as well (it's been a while since I've used it, but
I believe there's a Local DNS settings page).
Otherwise, if you only have a small handful of devices, you could add
an entry to /etc/hosts (or equivalent) on each device. Not pretty,
but it works.
HTML [1]: https://blog.entek.org.uk/notes/2021/01/05/split-dns-with-dn...
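For the dnsmasq route, the whole split-DNS trick comes down to one line of config; the /etc/hosts fallback is equally short (hostname and LAN address are placeholders):

```shell
# Split DNS with dnsmasq: answer the public hostname with the LAN address.
# In /etc/dnsmasq.conf (or an OpenWRT /etc/dnsmasq.d/ drop-in):
#     address=/cloud.example.com/192.168.1.10
#
# Per-device fallback, appended to /etc/hosts:
#     192.168.1.10  cloud.example.com
```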
zeppelin101 wrote 1 day ago:
That's a good tip. I had my local self-hosting phase during covid,
but if I ever come back to it, I'll try this.
accrual wrote 1 day ago:
I had a similar issue with a public game server that required
connecting through the WAN even if clients were local on the LAN. I
considered split DNS (resolving the name differently depending on the
source) but it was complicated for my setup. Instead I found a
one-line solution on my OpenBSD router:
pass in on $lan_if inet proto tcp to (egress) port 12345 rdr-to 192.168.1.10
It basically says "pass packets from the LAN interface towards the
WAN (egress) on the game port and redirect the traffic to the local
game server". The local client doesn't know anything happened, it
just worked.
s_ting765 wrote 1 day ago:
Nextcloud server is written in PHP. Of course it is slow. It's also
designed to be used as an office productivity suite meaning a lot of
features you may not actually use are enabled by default and those
services come with their own cronjobs and so on.
m-a-r-c-e-l wrote 1 day ago:
PHP is super-fast today. I've built 2 customer-facing web products
with PHP, each of which became a million-dollar business. And they were
very fast!
HTML [1]: https://dev.to/dehemi_fabio/why-php-is-still-worth-learning-...
s_ting765 wrote 1 day ago:
At the risk of stating the obvious: PHP is limited to single-threaded
processes and has garbage collection. It's certainly not
the fastest language one could use for handling multiple concurrent
jobs.
m-a-r-c-e-l wrote 1 day ago:
That's incorrect. PHP has concurrency included.
On the other hand, in 99.99% of web applications you do not need
self-baked concurrency. Instead, use a queue system which handles
this. I've used this with 20 million background jobs per day
without hassles; it scales very well horizontally and vertically.
rafark wrote 1 day ago:
They didn't say it was the fastest. Just that the language per
se is fast enough.
s_ting765 wrote 1 day ago:
> the language per se is fast enough
I literally explained why this is not the case.
And Nextcloud being slow in general is not a new complaint from
users.
kirito1337 wrote 1 day ago:
I don't think I will ever use something like that. I work on over 10
PCs every day, and my only synchronisation is a 16 GB USB stick. I keep
all important work, apps and files there.
madeofpalk wrote 1 day ago:
I don't think this article actually does a great job of explaining why
Nextcloud feels slow. It shows lots of big numbers for MBs of
JavaScript being downloaded, but how does that actually impact the
user experience? Is the "slow" Nextcloud just sitting around waiting
for these JS assets to load and parse?
From my experience, this doesn't meaningfully impact performance.
Performance problems come from "accidentally quadratic" logic in the
frontend, poorly optimised UI updates, and too many API calls.
hamburglar wrote 1 day ago:
It downloads a lot of JavaScript, it decompresses a lot of
JavaScript, it parses a lot of JavaScript, it runs a lot of
JavaScript, it creates a gazillion onFoundMyNavel event callbacks
which all run JavaScript, it does all manner of uncontrolled
DOM-touching while its millions of script fragments do their thing,
it XHRs in response to XHRs in response to DOM content ready
events, it throws and swallows untold exceptions, has several dozen
slightly unoptimized (but not too terrible) page traversals... the
list goes on and on. The point is this all adds up, and having 15MB
of code gives a LOT of opportunity for all this to happen. I used to
work on a large site where we would break out the stopwatch and
paring knife if the homepage got to more than 200KB of code, because
it meant we were getting sloppy.
nikanj wrote 1 day ago:
But at least they're not prematurely optimizing
bob1029 wrote 1 day ago:
15+ megabytes of executable code begins to look quite insane when
you start to take a gander at many AAA games. You can produce a
non-trivial Unity WebGL build that fits in <10 megabytes.
72deluxe wrote 1 day ago:
Yes and Windows 3.11 came on 6 1.44MB floppy disks. Modern
software is so offensive.
hamburglar wrote 1 day ago:
Windows 3.11 also wasn't shipped to you over a cellular
connection when you clicked on it. If it were, 6x1.44MB would
have been considered quite unacceptable.
hamburglar wrote 1 day ago:
It's the kind of code size where you analyze it and find 13
different versions of jquery and a hundred different bespoke
console.log wrappers.
shermantanktop wrote 1 day ago:
Agreed. Plus if it truly downloads all of that every time, something
has gone wrong with caching.
Overeager warming/precomputation of resources on page load (rather
than on use) can be a culprit as well.
hamburglar wrote 1 day ago:
Relying on cache to cover up a 15MB JavaScript load is a serious
crutch.
shermantanktop wrote 1 day ago:
Oh totally, but - normal caching behavior would lead to different
results than reported in the article. It would impact cold-start
scenarios, not every page load. So something else is up.
xingped wrote 1 day ago:
I gave up on using Nextcloud because every time it updated it
accumulated more and more errors, and there was no way I was going to
use software that I had to troubleshoot after every single update. Also the
defaults for pictures are apparently quite stupid and so instead of
making and showing tiny thumbnails for pictures, the thumbnails are
unnecessarily large and loading the thumbnails for a folder of pictures
takes forever. You can fix this and tell it to make smaller thumbnails
apparently, but again, why am I having to fix everything myself? These
should be sane defaults. Unfortunately, I just can't trust Nextcloud.
estimator7292 wrote 1 day ago:
My NextCloud server completely borked itself with an automatic update
sometime in the last ~10 months. It's completely unresponsive.
I haven't bothered to fix it.
paularmstrong wrote 1 day ago:
I gave up updating Nextcloud. It works for what I use it for and I
don't feel like I'm missing anything. I'd rather not spend 4+ hours
updating and fixing confusing issues without any tangible benefit.
aborsy wrote 1 day ago:
A good thing thing about Nextcloud is that by learning one tool, you
get a full suite of collaboration apps: sync, file sharing, calendar,
notes, collectives, office (via Collabora or OnlyOffice), and more.
These features are pretty good, plus, you get things like photo
management and Talk, which are decent.
Sure, some people might argue that there are specialized tools for each
of these functions. And that's true. But the tradeoff is that you'd
need to manage a lot more with individual services. With Nextcloud, you
get a unified platform that might be good enough to run a company, even
if it's not very fast and some features might have bugs.
The AIO has addressed issues like update management and reliability;
it's been very good in my experience. You get a fully tested,
ready-to-go package from Nextcloud.
That said, I wonder: if the platform were rewritten in a more
performance-efficient language than PHP, with a simplified codebase and
trimmed-down features, would it run faster? The UI could also be more
polished; the Synology DSM web interface, for example, looks really
nice.
s1mplicissimus wrote 1 day ago:
rewriting in a lower-level language won't do too much for NC, because
it's mostly slow due to inefficient IO organization - things like
mountains of XHRs, inefficient fetching, db querying etc. - None of
that will be implicitly fixed by a rewrite in any language and can be
fixed in the PHP stack as well.
I think one of the reasons that helped OC/NC get off the ground was
precisely that the sysadmins running it can often do a little PHP,
which is just enough to get it customized for the client. Raising the
bar for contribution by using lower level languages might not be a
desirable change of direction in that case.
troyvit wrote 1 day ago:
The thing I don't get is that based on the article the front-end is
as bloated as the back-end.
That said, there's an Owncloud version called Infinite Scale which is
written in Go.[1] Honestly I tried to go that route, but its
requirements are pretty opinionated (Ubuntu LTS 22.04 or 24.04 and
lots of docker containers littering your system). Still, it looks like
it's getting a lot of development.
HTML [1]: https://doc.owncloud.com/
preya2k wrote 1 day ago:
Most of the OCIS team left to start OpenCloud, which is a OCIS
fork. And it's hardware requirements are pretty tame. It's a very
nice replacement for Nextcloud, if you don't need the Groupware
features/Apps and are only looking for File sharing.
troyvit wrote 8 hours 8 min ago:
Holy cow this looks awesome. I'm digging in now.
c-hendricks wrote 1 day ago:
> it's requirements are pretty opinionated (Ubuntu LTS 22.04 or
24.04
Hm?
> This guide describes an installation of Infinite Scale based on
Ubuntu LTS and docker compose. The underlying hardware of the
server can be anything as listed below as long it meets the OS
requirements defined in the Software Stack [1]
The Software Stack section goes on to say it just needs Docker,
Docker Compose, shell access, and sudo.
Ubuntu and sudo are probably only mentioned because the guide walks
you through installing docker and docker compose.
HTML [1]: https://doc.owncloud.com/ocis/next/depl-examples/ubuntu-co...
hedora wrote 1 day ago:
If the developers can only get it to run in a pile of ubuntu
containers, then it's extremely likely they haven't thought
through basic things you need to operate a service, like supply
chain security, deterministic builds, unit testing, upgrades,
etc.
cloudfudge wrote 1 day ago:
I see 6 officially supported linux distributions. I don't know
where anyone got the idea that they can only get it to run on
ubuntu. It's containerized. Who cares what the host os is,
beyond "it can run containers"?
troyvit wrote 8 hours 11 min ago:
Here's where I got it from: [1] And I wish it was
"containerized" but really it's "dockerized" as this thread
demonstrates: [2] So yeah like I said in my original comment,
for personal use it's just not right for me (because I choose
not to use docker in my personal projects), but I hope it's
right for other people because it looks like a killer app.
I'd definitely like to see what other options are available
on other distros so I'll dig through their documentation
more.
HTML [1]: https://doc.owncloud.com/ocis/next/depl-examples/ubu...
HTML [2]: https://central.owncloud.org/t/owncloud-docker-image...
TheAngush wrote 2 hours 8 min ago:
Your second link appears to be about OwnCloud, not OwnCloud
Infinite Scale.
cloudfudge wrote 5 hours 58 min ago:
I think what you're looking at is: "Here's an example of
installing this on ubuntu 24.04. These instructions will
also work on 22.04." This is in no way saying they can
only get it to work on ubuntu; they just haven't written a
step-by-step example like this for other distributions.
And yeah, trying to use podman with something that's based
on docker compose is ... probably gonna give you some
headaches, I'd guess. I don't particularly know the
pitfalls but if you're expecting it to be transparently
swappable, I don't think that's an owncloud issue.
cbondurant wrote 1 day ago:
I've used nextcloud for close to I think 8 years now as a replacement
for google drive.
However my need for something like google drive has reduced massively,
and nextcloud continues to be a massive maintenance pain due to its
frustratingly fast release cadence.
I don't want to have to log into my admin account and baby it through a
new release and migration every four months! Why aren't there any LTS
branches? The amount of admin work that nextcloud requires only makes
sense for when you legitimately have a whole group of people with
accounts that are all utilizing it regularly.
This is honestly the kick in the pants I need to find a solution that
actually fits my current use-case. (I just need to sync my fuckin
keepass vault to my phone, man.) Syncthing looks promising with
significantly less hassle...
caspar wrote 13 hours 40 min ago:
Been using syncthing with keepass(X/XC) for probably half a decade
now and it works great, especially since KeepassXC has a great
built-in merge feature for the rare cases that you get conflicts from
modifying your vault on different clients before they sync.
The only major point of friction with syncthing is that you should
designate one almost-always-on device as "introducer" for every
single one of your devices, so that it will tell all your devices
whenever it learns about a new device. Otherwise whenever you gain a
device (or reinstall etc) then you have to go to N devices to add
your new device there.
Oh, and you can't use syncthing to replicate things between two dirs
on the same computer - which isn't a big deal for the keepass usecase
and arguably is more of a rsync+cron task anyway but good to be aware
of.
xandrius wrote 1 day ago:
Been running NC on my home server and I basically update it maybe once
a year or so? Probably even less, so it's definitely not a must to
update every time. Plus via snap it's pretty simple.
jw_cook wrote 1 day ago:
The linuxserver.io image for Nextcloud requires considerably less
babysitting for upgrades: [1] As long as you only upgrade one major
version at a time, it doesn't require putting the server in
maintenance mode or using the occ cli.
HTML [1]: https://docs.linuxserver.io/images/docker-nextcloud
tracker1 wrote 1 day ago:
Might also consider Vaultwarden/Bitwarden as a self-host alternative.
Yeah it's client-server... that said, been pretty happy as a user.
catapart wrote 1 day ago:
Just like any other modern app: first you make it work using
frameworks. Then, as soon as the "Core" product is done - just a few
more features - then we'll circle back around to ripping out those
bloated frameworks for something more lithe. Shouldn't be more than two
weeks, now. Most of the base stuff is done. Just another feature or
two. I mean, a little longer, if we have some issues with those
features, sure. But we'll get back around to a simpler UI right after!
Just those features, their bugs and support, and then - well
documentation. Just the minimum stuff. Enough to know what we did when
we come back to it. But we'll whip up those docs and then it's right on
to slimming down the frontend! Won't be long now...
bogwog wrote 1 day ago:
Nextcloud is bloated and slow, but it works and is reliable. I've been
running a small instance in a business setting with around 8 daily
users for many years. It is rock solid and requires zero maintenance.
But people rarely use the web apps. Instead, it's used more like a NAS
with the desktop sync client being the primary interface. Nobody likes
the web apps because they're slow. The Windows desktop sync client has
a really annoying update process, but other than that is excellent.
I could replace it with a traditional NAS, but the main feature keeping
me there is an IMAP authentication plugin. This allows users to sign in
with their business email/password. It works so well and makes it so
much easier to manage user accounts, revoke access, do password resets,
etc.
imcritic wrote 1 day ago:
> Nobody likes the web apps because they're slow.
Web apps don't have to be slow. I prefer web apps over system apps,
as I don't have to install extra programs into my system and I have
more control over those apps:
- a service decides it's a good idea to load some tracking stuff from
3rd-party? I just uMatrix block it;
- a page has an unwanted element? I just uBlock block it;
- a page could have a better look? I just userstyle style it;
- a page is missing something that could be added on client side? I
just userscript script it
Jaxan wrote 1 day ago:
Do you also prefer a web-based file browser? My main use for
Nextcloud is files and a desktop sync is crucial and integrates
with the OS.
bfkwlfkjf wrote 1 day ago:
I've never used nextcloud, but I always imagined that the point is you
can run the services but then plug in any calendar app etc. You don't
have to be running Nextcloud's calendar, I thought. Did I misunderstand
how it works?
glenstein wrote 1 day ago:
If dav works best for you, you're using it right.
I would assume that the people for whom a slow web-based calendar is
a problem (among other slow things on the web interface) are people
who would want to be using it if it performed well.
They wouldn't just make a bad slow web interface on purpose to
enlighten people as to how bad web interfaces are, as a complicated
way of pushing them toward integrated apps.
imcritic wrote 1 day ago:
Their calendar plugin provides CalDAV, so you could just use your
local calendar app that syncs with the server over that protocol.
bfkwlfkjf wrote 1 day ago:
Sooooo why not just host any caldav server instead? Like, why is
nextcloud so popular compared to self hosting caldav?
maples37 wrote 1 day ago:
In my case, I want file/photo syncing, calendar syncing, and
contact syncing.
Nextcloud provides all 3 in a package that pretty much just
works, in my experience (despite being kinda slow).
The Notes app is a pretty nice wrapper around a specific folder
full of markdown files, I mostly use it on my phone, and on my
desktop I just use my favorite editor to poke at the .md files
directly.
Oh, and when a friend group wanted a better way to figure out
which day to get together, I just installed the Polls app with a
few clicks and we use that now.
I am a bit disappointed in the performance, but I've been running
this setup for years and it "just works" for me. I understand
how it works, and I know how to back it up (and, more importantly,
restore from that backup!).
If there's another open-source, self-hosted project that has
WebDAV, CalDAV, and CardDAV all in one package, then I might
consider switching, but for now Nextcloud is "good enough" for
me.
bfkwlfkjf wrote 1 day ago:
Ok so it's just the convenience of being a package, thank you
for explaining.
dingdingdang wrote 1 day ago:
Having at some point maintained a soft fork / patch-set for Nextcloud...
yes, there is so much performance left on the table. With a few basic
patches, the file manager, for example, sped up by orders of magnitude
in terms of render speed.
The issue remains that the core itself feels like layers upon layers of
encrusted code that instead of being fixed have just had another layer
added ... "something fundamental wrong? Just add Redis as a dependency.
Does it help? Unsure. Let's add something else. Don't like having the
config in a db? Let's move some of it to ini files (or vice
versa)..etc..etc." it feels like that's the cycle and it ain't pretty
and I don't trust the result at all. Eventually abandoned the project.
Edit: at some point I reckon some part of the ecosystem recognised some
of these issues and hence Owncloud remade a large part of the
fundamentals in Golang. It remains unknown to me whether this sorted
things or not. All of these projects feel like they suffer badly from
"overbuild".
Edit-edit: another layer to add to the mix is that the "overbuild"
situation is probably largely what allows the hosting economy around
these open source solutions to thrive since Nextcloud and co. are so
over-engineered and badly documented that they -require- a dedicated
sys-admin team to run well.
redrblackr wrote 1 day ago:
Two things:
1. Did you open a pull request with these basic patches? If you
have orders-of-magnitude speed improvements, it would be awesome to
share!
2. You definitely don't need an entire sysadmin team to run
Nextcloud. At my work (a large organisation) there are three instances
running (for different parts/purposes), of which only one is run by
more than one person, and I myself run both my personal instance and
one for a nonprofit with ~100 people. It's really not much work after
setup (and there are plenty of far more complicated systems to set up,
trust me).
dingdingdang wrote 1 day ago:
1. There was no point, having thought about it a bit; a lot of the
patches (in essence it was at most a handful) revolved around
disabling features which in turn could never have been upstreamed.
An example was, as mentioned elsewhere in this comment section, the
abysmal performance of the thumbnail gen feature, it never cached
right, it never worked right and even when it did it would
absolutely kill listings of larger folders of media - this was
basically hacked out and partially replaced with much simpler gen
on images alone, suddenly the file manager worked again for
clients.
2. Guess that's debatable, or maybe even skill dependent (mea
culpa), and also largely a question of how comfortable one is with
systems that cannot be reasoned about cleanly (similar to TFA I
just could not stand the bloat, it made me feel more than mildly
unwell working with it). Eventually it was GDPR reqs that drove us
towards the big G across multiple domains.
On another note it strikes me how the attempts at re-gen'ing folder
listings online really is Sisyphus work, there should be a clean
way to enfold multiuser/access-tokens into the filesystems of
phones/PCs/etc. The closest pseudo example at the moment I guess is
classic Google Drive but of course it would need gating and
security on the OS side of things that works to a standard across
multiple ecosystems (Apple, MS, Android, iPhone, Linux etc.) ...
yeeeeah, better keep polishing that HTML ball of spaghetti I guess
;)
INTPenis wrote 1 day ago:
This is my theory as well. NC has grown gradually in silos almost,
every piece of it is some plugin they've imported from contributions
at some point.
For example the reason there's no cohesiveness with a common
websocket bus for all those ajax calls is because they all started
out as a separate plugin.
NC has gone full modularity and lost performance for it. What we need
is a more focused and cohesive tool for document sharing.
Honestly I think today with IaC and containers, a better approach for
selfhosting is to use many tools connected by SSO instead of one
monstrosity. The old Unix philosophy, do one thing but do it well.
eYrKEC2 wrote 22 hours 20 min ago:
Why do you need a common websocket bus when h2 interleaves all the
HTTP requests over the same SSL tunnel?
rahkiin wrote 1 day ago:
This still needs cohesive authorization and central file sharing
and access rules across apps.
And some central concept of projects to move all content away from
people and into the org and roles
PaulKeeble wrote 1 day ago:
I don't doubt that large amounts of javascript can often cause issues
but even when cached NextCloud feels sluggish. When I look at just the
network tab of a refresh of the calendar page, it does 124 network
calls, 31 of which aren't cached. It seems to be making a call per
calendar, each of which is over 30ms. So that stacks up the more
calendars you have (and you have a number by default, like contact
birthdays).
The Javascript performance trace shows over 50% of the work is in
making the asynchronous calls to pull those calendars and other network
data one by one, and then in all the refresh updates it triggers to put
them onto the page.
Supporting all these N calendar calls, it pulls individually the
calendar rooms, calendar resources and "principals" for the user.
All separate individual network calls, some of which must be gating the
later individual calendar calls.
It's not just that: it also makes a call for notifications, groups, user
status and multiple heartbeats to complete the page as well, all before
it tries to get the calendar details.
This is why I think it feels slow: it's pulling down the page, and then
the javascript is pulling down all the bits of data for everything on
the screen with individual calls, waiting for the responses before it
can progress to make the further calls, of which there can be N many
depending on what the user is doing.
So across the local network (2.5Gbps) that is a second, and most of it
is waiting for the network. If I use the regular 4G level of throttling,
it takes 33.10 seconds! Really goes to show how badly this design copes
with extra latency.
bityard wrote 1 day ago:
The thing that kills me is that Nextcloud had an _amazing_ calendar a
few years ago. It was way better than anything else I have used. (And
I tried a lot, even the calendar add-on for Thunderbird. Which may or
may not be built in these days, I can't keep track.)
Then at some point the Nextcloud calendar was "redesigned" and now
it's completely terrible. Aesthetically, it looks like it was
designed for toddlers. Functionally, adding and editing events is
flat out painful. Trying to specify a time range for an event is
weird and frustrating. It's better than not having a calendar, but
only just.
There are plenty of open source calendar _servers_, but no good open
source web-based calendars that I have been able to find.
jauntywundrkind wrote 1 day ago:
Sync Conf is next week, and this sort of issue is exactly the kind of
thing I hope can just go away. [1] Efforts like Electric SQL to have
APIs/protocols for bulk fetching all changes (to a "table") are where
it's at. [2] It's so rare for teams to do data loading well, rarer
still do we get effective caching, and often a product's footing here
only degrades with time. The various sync ideas out there offer such
an alluring potential: a consistent way to get the client the updated
live data it needs.
Side note: I'm also hoping the JS / TC39 source phase imports
proposal (aka import source) can help let large apps like NextCloud
defer loading more of their JS until needed. But the waterfall you
call out here seems like the real bad side (of NextCloud's
architecture)!
HTML [1]: https://syncconf.dev/
HTML [2]: https://electric-sql.com/docs/api/http
HTML [3]: https://github.com/tc39/proposal-source-phase-imports
riskable wrote 1 day ago:
I was going to say... The size of the JS only matters the first time
you download it, unless there's a lot of tiny files instead of a
bundle or two. What the article is complaining about doesn't seem
like it's the root cause of the slowness.
When it comes to JS optimization in the browser there's usually a few
great big smoking guns:
1. Tons of tiny files: Bundle them! Big bundle > zillions of
lazy-loaded files.
2. Lots of AJAX requests: We have WebSockets for a reason!
3. Race conditions: Fix your bugs :shrug:
4. Too many JS-driven animations: Use CSS or JS that just
manipulates CSS.
Nextcloud appears to be slow because of #2. Both #1 and #2 are
dependent on round-trip times (HTTP request to server -> HTTP
response to client) which are the biggest cause of slowness on mobile
networks (e.g. 5G).
Modern mobile network connections have plenty of bandwidth to deliver
great big files/streams but they're still super slow when it comes to
round-trip times. Knowing this, it makes perfect sense that
Nextcloud would be slow AF on mobile networks because it follows the
REST philosophy.
My controversial take: GIVE REST A REST already! WebSockets are
vastly superior and they've been around for FIFTEEN YEARS now. Do I
understand why they're so much lower latency than REST calls on
mobile networks? Not really: in theory it's still a round trip, but
for some reason an open connection can pass data through with an order
of magnitude (or more) lower latency on something like a 5G connection.
amluto wrote 16 hours 35 min ago:
Why WebSockets? If you need to fetch 30 things, you can build an
elaborate protocol to stream them in without them interfering with
each other, or you can ask for all thirty at once. Plain HTTP(S)
can do the latter just fine, although the API might not be quite
RESTful.
jadbox wrote 1 day ago:
How do you feel about SSE then?
Yokolos wrote 1 day ago:
I've never seen anybody recommend WebSockets instead of REST. I
take it this isn't a widely recommended solution? Do you mean
specifically for mobile clients only?
DecoPerson wrote 1 day ago:
WebSockets are the secret ingredient to amazing low- to
medium-user-count software. If you practice using them enough and
build a few abstractions over them, you can produce incredible
"live" features that REST designs struggle with.
Having used WebSockets a lot, I've realised that it's not the
simple fact that WebSockets are duplex or that it's more
efficient than using HTTP long-polling or SSEs or something
else... No, the real benefit is that once you have a "socket"
object in your hands, and this object lives beyond the normal
"request->response" lifecycle, you realise that your users
DESERVE a persistent presence on your server.
You start letting your route handlers run longer, so that you can
send the result of an action, rather than telling the user to
"refresh the page" with a 5-second refresh timer.
You start connecting events/pubsub messages to your users and
forwarding relevant updates over the socket you already hold.
(Trying to build a delta update system for polling is complicated
enough that the developers of most bespoke business software
I've seen do not go to the effort of building such things...
But with WebSockets it's easy, as you just subscribe before
starting the initial DB query and send all broadcasted update
events for your set of objects on the fly.)
You start wanting to output the progress of a route handler to
the user as it happens ("Fetching payroll details...",
"Fetching timesheets...", "Correlating timesheets and clock
in/out data...", "Making payments...").
Suddenly, as a developer, you can get live debug log output IN
THE UI as it happens. This is amazing.
AND THEN YOU WANT TO CANCEL SOMETHING because you realise you
accidentally put in the actual payroll system API key. And that
gets you thinking... can I add a cancel button in the UI?
Yes, you can! Just make a "ctx.progress()" method. When
called, if the user has cancelled the current RPC, then throw an
RPCCancelled error that's caught by the route handling system.
There's an optional first argument for a progress message to
the end user. Maybe add a "no-cancel" flag too for critical
sections.
And then you think about live collaboration for a bit... that's
a fun rabbit hole to dive down. I usually just do "this is
locked for editing" or check the per-document incrementing
version number and say "someone else edited this before you
started editing, your changes will be lost, please reload".
Figma cracked live collaboration, but it was very difficult based
on what they've shared on their blog.
And then... one day... the big one hits... where you have a
multistep process and you want Y/N confirmation from the user or
some other kind of selection. The sockets are duplex! You can
send a message BACK to the RPC client, and have it handled by the
initiating code! You just need to make it so devs can add event
listeners on the RPC call handle on the client! Then, your
server-side route handler can just "await" a response! No
need to break up the handler into multiple functions. No need to
pack state into the DB for resumability. Just await (and make
sure the Promise is rejected if the RPC is cancelled).
If you have a very complex UI page with live-updating pieces, and
you want parts of it to be filterable or searchable... This is
when you add "nested RPCs". And if the parent RPC is
cancelled (because the user closes that tab, or navigates away,
or such) then that RPC and all of its children RPCs are
cancelled. The server-side route handler is a function closure
that holds a bunch of state that can be used by any of the
sub-RPC handlers (they can be added with "ctx.addSubMethod"
or such).
The end result is: while building out any feature of any
"non-web-scale" app, you can easily add levels of polish that
are simply too annoying to obtain when stuck in a REST point of
view. Sure, it's possible to do the same thing there, but
you'll get frustrated (and so development of such features will
not be prioritised). Also, perf-wise, REST is good for "web
scale" / high user counts, but you will hit weird latency
issues if you try to use it for live, duplex comms.
WebSockets (and soon the HTTP3 transport API) are game-changing. I
highly recommend trying some of these things.
tyre wrote 1 day ago:
Find someone to love you the way DecoPerson loves websockets.
riskable wrote 1 day ago:
After all my years of web development, my rules are thus:
* If the browser has an optimal path for it, use HTTP (e.g.
images where it caches them automatically or file uploads where
you get a "free" progress API).
* If I know my end users will be behind some shitty firewall
that can't handle WebSockets (like we're still living in the
early 2010s), use HTTP.
* Requests will be rare (per client): Use HTTP.
* For all else, use WebSockets.
WebSockets are just too awesome! You can use a simple event
dispatcher for both the frontend and the backend to handle any
given request/response and it makes the code sooooo much simpler
than REST. Example:
WSDispatcher.on("pong", pongFunc);
...and `WSDispatcher` would be the (singleton) object that holds
the WebSocket connection and has `on()`, `off()`, and
`dispatch()` functions. When the server sends a message like
`{"type": "pong", "payload": ""}`, the client calls
`WSDispatcher.dispatch("pong", "")` which results in
`pongFunc("")` being called.
It makes reasoning about your API so simple and human-readable!
It's also highly performant and fully async. With a bit of
Promise wrapping, you can even make it behave like a synchronous
call in your code which keeps the logic nice and concise.
In my latest pet project (collaborative editor) I've got the
WebSocket API using a strict "call"/"call:ok" structure. Here's
an example from my WEBSOCKET_API.md:
### Create Resource
```javascript
// Create story
send('resources:create', {
  resource_type: 'story',
  title: 'My New Story',
  content: '',
  tags: {},
  policy: {}
});

// Create chapter (child of story)
send('resources:create', {
  resource_type: 'chapter',
  parent_id: 'story_abc123', // This would actually be a UUID
  title: 'Chapter 1'
});

// Response:
{
  type: 'resources:create:ok', // <- Note the ":ok"
  resource: { id: '...', resource_type: '...', ... }
}
```
I've got a `request()` helper that makes the async nature of the
WebSocket feel more like a synchronous call. Here's what that
looks like in action:
const wsPromise = getWsService(); // Returns the WebSocket singleton

// Create resource (story, chapter, or file)
async function createResource(data: ResourcesCreateRequest) {
  loading.value = true;
  error.value = null;
  try {
    const ws = await wsPromise;
    const response = await ws.request(
      "resources:create",
      data // <- The payload
    );
    // resources.value because it's a Vue 3 `ref()`:
    resources.value.push(response.resource);
    return response.resource;
  } catch (err: any) {
    error.value = err?.message || "Failed to create resource";
    throw err;
  } finally {
    loading.value = false;
  }
}
For reference, errors are returned in a different, more verbose
format where "type" is "error" in the object that the `request()`
function knows how to deal with. It used to be ":err" instead of
":ok" but I made it different for a good reason I can't remember
right now (LOL).
Aside: There's still THREE firewalls that suck so bad they can't
handle WebSockets: SophosXG Firewall, WatchGuard, and McAfee Web
Gateway.
fluoridation wrote 1 day ago:
>Do I understand why they're so much lower latency than REST calls
on mobile networks? Not really: In theory, it's still a round-trip
but for some reason an open connection can pass data through an
order of magnitude (or more) lower latency on something like a 5G
connection.
It's because a TLS handshake takes more than one roundtrip to
complete. Keeping the connection open means the handshake needs to
be done only once, instead of over and over again.
binary132 wrote 1 day ago:
doesn't HTTP keep connections open?
fluoridation wrote 1 day ago:
It's up to the client to do that. I'm merely explaining why
someone would see a latency improvement switching from HTTPS to
websockets. If there's no latency improvement then yes, the
client is keeping the connection alive between requests.
riskable wrote 1 day ago:
Yes and no: There's still a rather large latency improvement
even when you're using plain HTTP (not that you should go without
encryption).
I was very curious so I asked AI to explain why websockets would
have such lower latency than regular HTTP and it gave some
(uncited, but logical) reasons:
Once a WebSocket is open, each message avoids several sources of delay that an HTTP request can hit, especially on mobile. The big wins are skipping connection setup and radio wakeups, not shaving a few header bytes.

Why WebSocket "ping/pong" often beats HTTP GET /ping on mobile:

- No connection setup on the hot path.
  - HTTP (worst case): DNS + TCP 3-way handshake + TLS handshake (HTTPS) before you can send the request. On mobile RTTs (60-200+ ms), that's 1-3 extra RTTs, i.e. 100-500+ ms just to get started.
  - HTTP with keep-alive/H2/H3: better (no new TCP/TLS), but pools can be empty or closed by OS/radios/idle timers, so you still pay setup sometimes.
  - WebSocket: you pay the TCP+TLS+Upgrade once. After that, a ping is just one round trip on an already-open connection.
- Mobile radio state promotions.
  - Cellular modems drop to low-power states when idle. A fresh HTTP request can force an RRC "promotion" from idle to connected, adding tens to hundreds of ms.
  - A long-lived WebSocket with periodic keepalives tends to keep the radio in a faster state or makes promotion more likely to already be done, so your message departs immediately.
  - Trade-off: keeping the radio "warm" costs battery; most realtime apps tune keepalive intervals to balance latency vs power.
- Fewer app/stack layers per message.
  - HTTP request path: request line + headers (often cookies, auth), routing/middleware, logging, etc. Even with HTTP/2 header compression, the server still parses and runs more machinery.
  - WebSocket after upgrade: tiny frame parsing (client-to-server frames are 2-byte header + 4-byte mask + payload), often handled in a lightweight event loop. Much less per-message work.
- No extra round trips from CORS preflight.
  - A simple GET usually avoids preflight, but if you add non-safelisted headers (e.g. Authorization) the browser will first send an OPTIONS request. That's an extra RTT before your GET.
  - WebSocket doesn't use CORS preflights; the Upgrade carries an Origin header that servers can validate.
- Warm path effects.
  - Persistent connections retain congestion window and NAT/firewall state, reducing first-packet delays and occasional SYN drops that new HTTP connections can encounter on mobile networks.

What about encryption (HTTPS/WSS)?
- Handshake cost: TLS adds 1-2 RTTs (TLS 1.3 is 1-RTT; 0-RTT is possible but niche). If you open and close HTTP connections frequently, you keep paying this. A WebSocket pays it once, then amortizes it over many messages.
- After the connection is up, the per-message crypto cost is small compared to network RTT; the latency advantage mainly comes from avoiding repeated handshakes.

How much do headers/bytes matter? For tiny messages, both HTTP and WS fit in one MTU. The few hundred extra bytes of HTTP headers rarely change latency meaningfully on mobile; the dominant factor is extra round trips (connection setup, preflight) and radio state.

When the gap narrows: if your HTTP requests reuse an existing HTTP/2 or HTTP/3 connection, have no preflight, and the radio is already in a connected state, a minimal GET /ping and a WS ping/pong both take roughly one network RTT. In that best case, latencies can be similar. In real mobile conditions, the chances of hitting at least one of the slow paths above are high, so WebSocket usually looks faster and more consistent.
fluoridation wrote 1 day ago:
Wow. Talk about inefficiency. It just said the same thing I
did, but using twenty times as many characters.
>Yes and no: There's still a rather large latency improvement
even when you're using plain HTTP (not that you should go
without encryption).
Of course. An unencrypted HTTP request takes a single roundtrip
to complete. The client sends the request and receives the
response. The only additional cost is to set up the connection,
which is also saved when the connection is kept open with a
websocket.
cloudfudge wrote 1 day ago:
Yes and no. Have you considered that the problem is that a
TLS handshake takes more than one round trip to complete?
/s
fwlr wrote 1 day ago:
15MB of JavaScript is 15MB of code that your browser is trying to execute. It's the same principle as "compiling a million lines of code takes a lot longer than compiling a thousand lines".
riskable wrote 1 day ago:
It's a lot more complicated than that. If I have a 15MB .js file
and it's just a collection of functions that get called on-demand
(later), that's going to have a very, very low overhead because
modern JS engines JIT compile on-the-fly (as functions get used)
with optimization happening for "hot" stuff (even later).
If there's 15MB of JS that gets run immediately after page load,
that's a different story. Especially if there's lots of nested
calls. Ever drill down deep into a series of function calls
inside the performance report for the JS on a web page? The more
layers of nesting you have, the greater the overhead.
DRY as a concept is great from a code readability standpoint but it's not ideal for performance when it comes to things like JS execution (haha). I'm actually disappointed that modern bundlers
don't normally inline calls at the JS layer. IMHO, they rely too
much on the JIT to optimize hot call sites when that could've
been done by the bundler. Instead, bundlers tend to optimize for
file size which is becoming less and less of a concern as
bandwidth has far outpaced JS bundle sizes.
The entire JS ecosystem is a giant mess of "tiny package does one
thing well" that is dependent on n layers of "other tiny package
does one thing well." This results in LOADS of unnecessary
nesting when the "tiny package that does one thing well" could've
just written their own implementation of that simple thing it
relies on.
Don't think of it from the perspective of, "tree shaking is
supposed to take care of that." Think of it from the perspective
of, "tree shaking is only going to remove dead/duplicated code to
save file sizes." It's not going to take that 10-line function and put its logic right where it's used (in order to shorten the call tree).
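A contrived illustration of the inlining point (both functions are hypothetical; a bundler that inlined aggressively could flatten the first form into the second):

```typescript
// Micro-package style: each helper delegates to another, so one call
// fans out into a chain of nested calls the JIT must discover and
// optimize at runtime.
const isObject = (v: unknown) => typeof v === "object" && v !== null;
const hasKey = (v: unknown, k: string) => isObject(v) && k in (v as object);
const getKey = (v: unknown, k: string) =>
  hasKey(v, k) ? (v as Record<string, unknown>)[k] : undefined;

// What an inlining bundler could emit instead: one flat function with
// the same behavior and no nested call frames.
const getKeyInlined = (v: unknown, k: string) =>
  typeof v === "object" && v !== null && k in (v as object)
    ? (v as Record<string, unknown>)[k]
    : undefined;
```

The two are observably identical; the difference is only in call depth, which is exactly what current bundlers leave to the JIT.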
Joeri wrote 1 day ago:
That 15mb still needs to be parsed on every page load, even if
it runs in interpreted mode. And on low end devices there's
very little cache, so the working set is likely to be far
bigger than available cache, which causes performance to
crater.
riskable wrote 1 day ago:
Ah, that's the thing: "on page load". A one-time expense!
If you're using modern page routing, "loading a new URL"
isn't actually loading a new page... The client is just
simulating it via your router/framework by updating the page
URL and adding an entry to the history.
Also, 15MB of JS is nothing on modern "low end devices".
Even an old, $5 Raspberry Pi 2 won't flinch at that and
anything slower than that... isn't my problem! Haha =)
There comes a point where supporting 10yo devices isn't worth
it when what you're offering/"selling" is the latest &
greatest technology.
It shouldn't be, "this is why we can't have nice things!" It
should be, "this is why YOU can't have nice things!"
port11 wrote 13 hours 49 min ago:
This really is a very wrong take. My iPhone 11 isn't that
old but it struggles to render some websites that are
Chrome-optimised. Heck, even my M1 Air has a hard time
sometimes. It's almost 2026, we can certainly stop blaming
the client for our shitty web development practices.
fluoridation wrote 1 day ago:
>There comes a point where supporting 10yo devices isn't
worth it
Ten years isn't what it used to be in terms of hardware
performance. Hell, even back in 2015 you could probably
still make do with a computer from 2005 (although it might
have been on its last legs). If your software doesn't run
properly (or at all) on ten-year-old hardware, it's likely
people on five-year-old hardware, or with a lower budget,
are getting a pretty shitty experience.
I'll agree that resources are finite and there's a point
beyond which further optimizations are not worthwhile from
a business sense, but where that point lies should be
considered carefully, not picked arbitrarily and the
consequences casually handwaved with an "eh, not my
problem".
snovv_crash wrote 1 day ago:
When you write code with this mentality it makes my modern
CPU with 16 cores at 4GHz and 64GB of RAM feel like a
Pentium 3 running at 900MHz with 512MB of RAM.
Please don't.
binary132 wrote 1 day ago:
THANK YOU
8cvor6j844qw_d6 wrote 1 day ago:
Is Nextcloud reliable enough for "production" use?
Last time I heard a certain privacy community recommended against
Nextcloud due to some issues with Nextcloud E2EE.
yabones wrote 1 day ago:
Nextcloud, and before it Owncloud, have been "in production" in my
household for nearly a decade at this point. There have been some
botched updates and sync problems over the years, but it's been by
far the most reliable app I've hosted.
In terms of privacy & security, like everything it comes down to risk
model and the trade-offs you make to exist in the modern world.
Nextcloud is for sharing files, if nothing short of perfect E2EE is
tolerable it's probably not the solution for you, not to mention the
other 99.999% of services out there.
I think most of the problems people report come down to really bad
defaults that let it run like shit on very low-spec boxes that
shouldn't be supported (ie raspi gen 1/2 back in the day). Installing
redis and configuring php-fpm correctly fixes like 90% of the
problems, other than the bloated Javascript as mentioned in the op.
End of the day, it's fine. Not perfect, not ideal, but fine.
Yie1cho wrote 1 day ago:
the question is, what's your use case?
for me it's a family photo backup with calendars (private and shared
ones) running in a VM on the net.
its webui is rarely used by anyone (except me), everyone is using
their phones (calendars, files).
does it work? yes. does anyone other than me care about the bugs? no.
but no one really _uses_ it as if it were deployed for a small office
of 10-20-30 people. on the other hand, there are companies paying for
it.
for this,
imcritic wrote 1 day ago:
Kinda. In the long run you will definitely stumble upon a ton of
bugs, but they mostly have some workarounds. Mostly.
internet_points wrote 1 day ago:
syncthing otoh barely even has a web ui, so it's really fast :-P
accrual wrote 1 day ago:
Syncthing has been very "set it and forget it" for me. It updates
itself occasionally but I haven't had to fix anything yet.
imcritic wrote 1 day ago:
It felt unnecessarily complex for such a simple task as file
synchronization. I prefer Unison. Unfortunately, it is a blast from the past written in OCaml and there is no Android app :-(
tripplyons wrote 1 day ago:
I once discovered and reported a vulnerability in Nextcloud's web
client that was due to them including an outdated version of a
JavaScript-based PDF viewer. I always wondered why they couldn't just
use the browser's PDF viewer. I made $100, which was a large amount to
me as a 16 year old at the time.
Here is a blog post I wrote at the time about the vulnerability
(CVE-2020-8155):
HTML [1]: https://tripplyons.com/blog/nextcloud-bug-bounty
rahkiin wrote 1 day ago:
I recently needed to show a pdf file inside a div in my app. All I wanted was to show it and make it scrollable. The file comes from a fetch() with authorization headers.
I could not find a way to do this without pdf.js.
silverwind wrote 1 day ago:
[1] works well as a wrapper around the tag. No mobile support
though.
HTML [1]: https://www.npmjs.com/package/pdfobject
rahkiin wrote 1 day ago:
This made me try it once more and I got something to work with some Blobs, resource URLs, sanitization and iframes.
So I guess it is possible
tripplyons wrote 1 day ago:
Yeah, blobs seem like the right way to do it.
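The blob approach being discussed can be sketched roughly like this; the bearer-token auth scheme, function name, and element handling are all placeholders of mine, and the iframe relies on the browser's built-in PDF viewer where one exists:

```typescript
// Fetch a PDF with auth headers, then hand it to the browser's built-in
// viewer via a blob: URL in an iframe. Sketch only, not production code.
async function showPdf(url: string, token: string, container: { appendChild(el: unknown): void }): Promise<() => void> {
  const resp = await fetch(url, {
    headers: { Authorization: `Bearer ${token}` }, // placeholder auth scheme
  });
  if (!resp.ok) throw new Error(`fetch failed: ${resp.status}`);

  // Wrap the bytes in a Blob with the PDF MIME type so the viewer engages.
  const blob = new Blob([await resp.arrayBuffer()], { type: "application/pdf" });
  const blobUrl = URL.createObjectURL(blob);

  // document only exists in the browser, of course.
  const iframe = (globalThis as any).document.createElement("iframe");
  iframe.src = blobUrl;
  iframe.style.width = "100%";
  iframe.style.height = "100%";
  container.appendChild(iframe);

  // Return a cleanup hook: revoke the blob URL when the view is torn down.
  return () => URL.revokeObjectURL(blobUrl);
}
```

The blob: URL sidesteps the auth-header problem, since the bytes are fetched by your code and the viewer never makes its own request.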
rahkiin wrote 1 day ago:
There does not seem to be a way to configure anything though. It looks quite bad with the default zoom level and the toolbar...
moi2388 wrote 1 day ago:
The html object tag can just show a pdf file by default. Just fetch
it and pass the source there.
What is the problem with that exactly in your case?
jrochkind1 wrote 1 day ago:
I think it can't do that on iOS? Don't know if that is the
relevant thing in the choice being discussed though. Not sure
about Android.
buibuibui wrote 1 day ago:
I find the Nextcloud client really buggy on the Mac, especially the VFS
integration. The file syncing is also really slow. I switched back to
P2P file syncing via Syncthing and Resilio Sync out of frustration.
RiverCrochet wrote 1 day ago:
I've played around with many self-hosted file manager apps. My first
one was Ajaxplorer which then became Pydio. I really liked Pydio but
didn't stick with it because it was too slow. I briefly played with
Nextcloud but didn't stick with it either.
Eventually I ran into FileRun and loved it, even though it wasn't
completely open source. FileRun is fast, worked on both desktop and
mobile via browser nicely, and I never had an issue with it. It was
free for personal use a few years ago, and unfortunately is not
anymore. But it's worth the license if you have the money for it.
I tried setting up SeaFile but I had issues getting it working via a
reverse proxy and gave up on it.
I like copyparty ( [1] ) - really dead simple to use and quick like
FIleRun - but the web interface is not geared towards casual users. I
also miss Filerun's "Request a file" feature which worked very nicely
if you just wanted someone to upload a file to you and then be done.
HTML [1]: https://github.com/9001/copyparty
tripflag wrote 1 day ago:
> I also miss Filerun's "Request a file" feature which worked very
nicely if you just wanted someone to upload a file to you and then be
done.
With the disclaimer that I've never used Filerun, I think this can be
replicated with copyparty by means of the "shares" feature (--shr).
That way, you can create a temporary link for other people to upload
to, without granting access to browse or download existing files. It
works like this:
HTML [1]: https://a.ocv.me/pub/demo/#gf-bb96d8ba&t=13:44
t_mann wrote 1 day ago:
Copyparty can't (and doesn't want to) replace Nextcloud for many use
cases because it supports one-way sync only. The readme is pretty
clear about that. I'm toying with the idea of combining it with
Syncthing (for all those devices where I don't want to do a full
sync), does anybody have experience with that? I've seen some posts
that it can lead to extreme CPU usage when combined with other tools
that read/write/index the same folders, but nothing specifically
about Syncthing.
tripflag wrote 1 day ago:
Combining copyparty with Syncthing is not something I have tested
extensively, but I know people are doing this, and I have yet to
hear about any related issues. It's also a use case I want to
support, so if you /do/ hit any issues, please give word! I've
briefly checked how Syncthing handles the symlink-based file
deduplication, and it seemed to work just fine.
The only precaution I can think of is that copyparty's .hist folder
should probably not be synced between devices. So if you intend to
share an entire copyparty volume, or a folder which contains a
copyparty volume, then you could use the `--hist` global-option or
`hist` volflag to put it somewhere else.
As for high CPU usage, this would arise from copyparty deciding to
reindex a file when it detects that the file has been modified.
This shouldn't be a concern unless you point it at a folder which
has continuously modifying files, such as a file that is currently
being downloaded or otherwise slowly written to.
accrual wrote 1 day ago:
On the topic of self-hosted file manager apps, I've really liked
"filebrowser". Pair it with Syncthing or another sync daemon and
you've got a minimal self-hosted Dropbox clone.
* [1] *
HTML [1]: https://github.com/filebrowser/filebrowser
HTML [2]: https://github.com/hurlenko/filebrowser-docker
iN7h33nD wrote 20 hours 24 min ago:
Same. Just recently switched over to filebrowser-quantum. Can't quite endorse it yet, but it's promising so far (setup in a docker compose was a bit like whack-a-mole, but so was the original)
HTML [1]: https://github.com/gtsteffaniak/filebrowser
Yie1cho wrote 1 day ago:
nextcloud just feels abandoned, even if it isn't of course.
maybe paying customers are getting a different/updated/tuned version of
it. maybe not. but the only thing that keeps me using it is there aren't any real self-hosted alternatives.
why is it slow? if you just blink or take a breath, it touches the
database. years ago I tried to optimise it a bit and noticed that there's a horrible amount of DB transactions there without any apparent reason.
also, the android client is so broken...
MrDresden wrote 1 day ago:
I'm not sure why you feel like it is abandoned. There is a steady
release cadence and the changelog[0] clearly shows that much is being
worked on.
[0]:
HTML [1]: https://nextcloud.com/changelog/#latest32
estimator7292 wrote 1 day ago:
Because it feels worse and more broken as time goes on. Just like
any other abandoned web app, except it's being made worse and
slower as an active, deliberate, ongoing choice
Yie1cho wrote 1 day ago:
yes of course there's progress and new features and it's not really
abandoned per se.
but the feeling is that the outdated or simply bad decisions aren't
fixed or redesigned.
it could be made 100 times better.
palata wrote 1 day ago:
I would love to like Nextcloud, it's pretty great that it does exist.
Just that makes it better than... well everything else I haven't found.
What frustrates me is that it looks like it works, but once in a while
it breaks in a way that is pretty much irreparable (or at least not in
a practical way).
I want to run an iOS/Android app that backs up images on my server. I
tried the iOS app and when it works, it's cool. It's just that once in
a while I get errors like "locked webdav" files and it never seems to
recover, or sometimes it just stops synchronising and the only way to
recover seems to be to restart the sync from zero. It will gladly
upload 80GB of pictures "for nothing", discarding each one when it
arrives on the server because it already exists (or so it seems, maybe
it just overwrites everything).
The thing is that I want my family to use the app, so I can't access
their phone for multiple hours every 2 weeks; it has to work reliably.
If it was just for backing up my photos... well I don't need Nextcloud
for that.
Again, alternatives just don't seem to exist, where I can install an
app on my parent's iOS and have it synchronise their photo gallery in
the background. Except I guess iCloud, that is.
ergocoder wrote 2 hours 15 min ago:
> The majority of CEO job is excellent judgement and motivating
people.
Ain't that the problem with everything. They all look good on paper
until you try it for a while.
cess11 wrote 9 hours 0 min ago:
WebDAV is a nightmare, breaks when you least need it. Once I moved a
few TB over it, it took a week with all the retries and
troubleshooting.
As I understand it you can work around it with Nextcloud by running
some other transfer service and have it watch and automatically
import certain directories.
zelphirkalt wrote 15 hours 37 min ago:
You could set up Syncthing. Once properly configured (including ignoring files whose names cannot be handled by the backing storage or clients), you shouldn't need to touch it much.
jjav wrote 21 hours 59 min ago:
Nextcloud is great, but I don't use it for backup (didn't realize it
would even do that) so maybe that's why.
I use it for a family cloud service for chat, shared todo lists,
shared calendar and shared editing docs (don't want to put anything
private on e.g. google docs).
For all that, it's full of awesome.
jacomoRodriguez wrote 1 day ago:
I switch to FolderSync for the upload from mobile. Works like a
charm!
I know, it sucks that the official apps are buggy as hell, but the
server side is real solid
nolan879 wrote 1 day ago:
This also happened to me with my nextcloud, thankfully I did not lose
any photos. I transitioned to Immich for my photos and have not
looked back.
stavros wrote 1 day ago:
For photos, you can't beat Immich.
pdntspa wrote 1 day ago:
SyncThing
benhurmarcel wrote 1 day ago:
I stopped using Nextcloud when the iOS app lost data.
For some reason the app disconnected from my account in the
background from time to time (annoying but didn't think it was
critical). Once I pasted data on Nextcloud through the Files app
integration, it didn't sync because it was disconnected and didn't
say anything, and it lost the data.
ToucanLoucan wrote 1 day ago:
I never had data outright vanish, but similar to the comment you
replied to, it was just unreliable. I found Syncthing much more
useful over the long haul. The last 3 times I've had to do anything
with it were simply to manage having new machines replace old ones.
Syncthing sadly doesn't let you not download some folders or files,
but I just moved those to other storage. It beats the Nextcloud
headache.
cG_ wrote 14 hours 11 min ago:
I might be misunderstanding what you mean, but maybe the
.stignore[1] file is what you're looking for? Apologies if it
isn't :-)
HTML [1]: https://docs.syncthing.net/users/ignoring.html
ToucanLoucan wrote 14 hours 1 min ago:
Oh no worries, yeah that works like gitignore, I'm talking more like how Nextcloud and Dropbox let you like, have a list of folders and checkboxes where you can be like "this machine doesn't need my family photo collection synced to it" kinda thing. Which to my knowledge syncthing doesn't have.
Don't apologize tho! I appreciate the help!
miroljub wrote 6 hours 54 min ago:
You can achieve this by having multiple sync folders instead
of one folder with everything. Then you can configure exactly
what you sync where.
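For reference, Syncthing's ignore patterns live in a per-device .stignore file at the folder root, so each machine can opt out of subtrees it doesn't need even within a single shared folder. A hedged example (the paths are illustrative):

```
// .stignore is per-device: these paths stay on the other machines
// but are never synced down to this one.
Family Photos
Videos/Raw
// (?d) lets Syncthing delete these files if they block a directory removal
(?d).DS_Store
(?d)Thumbs.db
```

This is closer to a blocklist than Dropbox's checkbox-style selective sync, but it achieves the same effect.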
xeromal wrote 1 day ago:
Oof, sounds painful. It's hard to use anything when you can't trust
its fundamentals.
exe34 wrote 1 day ago:
I use syncthing, I've got a folder shared between my phone, laptop
and media center, and it just syncs everything easily.
dns_snek wrote 1 day ago:
It works well for smaller folders but it slows down to a crawl with
folders that contain thousands of files. If I add a file to an
empty shared folder it will sync almost instantly but if I take a
photo both sides become aware of the change rather quickly but then
they just sit around for 5 minutes doing nothing before starting
the transfer.
exe34 wrote 1 day ago:
how many thousands? I have a folder with a total of 12760 files
spread within several folders, but the largest I think is the one
with 3827 files.
I've noticed the sync isn't instantaneous, but if I ping one
device from the other, it starts immediately. I think Android has
some kind of network related sleep somewhere, since the two nixos
ones just sync immediately.
dns_snek wrote 1 day ago:
I have around 4000 photos and videos in this folder. I don't
know what it is but I know that it's not a network issue.
I think it takes a long time because the phone CPU is much
slower than the desktop but I couldn't tell you what it's
doing, the status doesn't say anything useful except noting
that files are out of sync and that the other device is
connected.
exe34 wrote 18 hours 42 min ago:
yes I do wish it would say a bit more of what's going on and
have a big button that says "try it now".
kelvinjps10 wrote 1 day ago:
I do the same it's so convenient
pjs_ wrote 1 day ago:
I've tried every scheme under the sun and Immich is the only thing I've ever seen that actually works for this use case
Larrikin wrote 1 day ago:
For your specific use case of photos, Immich is the front runner and
a much better experience. Sadly for the general Dropbox replacement I
haven't found anything either.
palata wrote 1 day ago:
Does its iOS/Android app automatically backup the photos in the
background? When I looked into Immich (didn't try it) it sounded
like it was more of a server thing. I need the automation so that
my family can forget about it.
jaden wrote 1 day ago:
I too have found Syncthing + Filebrowser to be a sufficient
substitute for Dropbox.
conradev wrote 1 day ago:
I use Syncthing as a Dropbox replacement, and I like it. I have a
machine at home running it that is accessible over the net. Not the
prettiest, but it works!
cortesoft wrote 1 day ago:
I love immich, too, but I have also run into a lot of issues with syncing large libraries. The iPhone app will just hang sometimes.
eptcyka wrote 18 hours 42 min ago:
Since the last major update to 2.0, it has gotten immensely
better. Whereas before the app would hang for 30 seconds on startup and only reliably sync in the foreground for my partner, it now just works: it opens fine and syncs in the background. Never had such
issues on my phone, probably the size of your collection matters
here.
palata wrote 1 day ago:
Does it recover though, or do you end up in situations where your
setup is essentially broken?
Like if I backup photos from iOS, then remove a subset of those
from iOS to make space on the phone (but obviously I want to keep
them on the cloud), and later the mobile app gets out of sync, I
don't want to end up in a situation where some photos are on iOS,
some on the cloud, but none of the devices has everything, and I
have no easy way to resync them.
localtoast wrote 1 day ago:
I have found adding the following four lines to the immich
proxy host in nginx proxy manager (advanced tab) solved my
immich syncing issues:
client_max_body_size 50000M;
proxy_read_timeout 600s;
proxy_send_timeout 600s;
send_timeout 600s;
FWIW, my library is about 22000 items large. Hope this helps
someone.
cortesoft wrote 1 day ago:
It won't recover unless I do something... sometimes just
quitting the iPhone app and then toggling enabling backups
works, but not always. I had to completely delete and reinstall
the app once to get it to work, and had to resync all 45000
images/videos I had.
I have had the server itself fail in strange ways where I had
to restart it. I had to do a full fresh install once when it
got hopelessly confused and I was getting database errors
saying records either existed when they shouldn't or didn't
exist when they should.
I think I am a pretty skilled sysadmin for these types of
things, having both designed and administered very large
distributed systems for two decades now, but maybe I am doing
things wrong, but I think there are just some gotchas still
with the project.
palata wrote 1 day ago:
Right, that's the kind of issues I am concerned about.
iCloud / Google Photos just don't have that, they really
never lose a photo. It's very difficult for me to convince my
family to move to something that may lose their data, when
iCloud / Google Photos works and is really not that
expensive.
cortesoft wrote 1 day ago:
It has gotten more stable as I have used it for a while. I
think if you want to do it, just wait until it is stable
and you have a good backup routine before relying on it.
redrblackr wrote 1 day ago:
There is also "memories for nextcloud" which basically matches
immich in feature set (was ahead until last month),
nextcloud+memories make a very strong replacement for gdrive or
dropbox
palata wrote 1 day ago:
Yeah I guess my issue is that if I can't trust the mobile app not
to lose my photos (or stop syncing, or not sync everything), then
I just can't use it at all. There is no point in having Nextcloud
AND iCloud just because I don't trust Nextcloud :D.
noname120 wrote 1 day ago:
Nextcloud mobile app is crap but fortunately it's just WebDAV so you can use any other WebDAV app for synchronization.
palata wrote 1 day ago:
That's a good point! Are there good WebDAV apps
synchronising, say the Photo gallery on iOS, transparently
and always in the background?
noname120 wrote 16 hours 10 min ago:
Unfortunately Apple puts extremely strict restrictions on
background tasks so you will never have something as
seamless as native iCloud or the amazing Android FolderSync
app that I used for realtime synchronization for several
years without a single issue.
I know people work around these iOS limitations by setting
up springboard widgets that piggyback on background refresh
tasks to do uploads. People also create Automator actions
(e.g. run every day at time or location based) in the
Shortcuts app.
I haven't tried it but a popular option on iOS seems to be:
HTML [1]: https://apps.apple.com/app/photosync-transfer-phot...
treve wrote 1 day ago:
I replaced all my Dropbox uses with SyncThing (and love it). I run
an instance on my server at all times and on every client.
BLKNSLVR wrote 1 day ago:
+1 for SyncThing
I have it installed on my immediate family's devices to ensure
all the photos are auto-backed-up to our NAS (which is then
backed up to another NAS).
I need to check to make sure it's still working once in a while
(every couple of months), but it's usually fine, and even if it's
somehow stopped working, getting it running again catches itself
up to where it should have been anyway.
63stack wrote 1 day ago:
Look into syncthing for a dropbox replacement, have been using it
for years, very satisfied.
layer8 wrote 1 day ago:
If you just need a Dropbox replacement for file syncing,
Nextcloud is fine if you use the native file system integrations
and ignore the web and WebDAV interfaces.
troyvit wrote 1 day ago:
Syncthing is under my "want to like" list but I gave up on it.
I'm a one person show who just wants to sync a few dozen markdown
files across a few laptops and a phone. Every time I'd run it I'd
invariably end up with conflict files. It got to the point where
I was spending more time merging diffs than writing. How it could
do that with just one person running it I have no idea.
zelphirkalt wrote 15 hours 34 min ago:
The conflicts come of course when you edit a file on 2 devices
before Syncthing had a chance to sync them. I mostly solved
this by running Syncthing on a server as well as on clients, so
that at least the server is always online, as a point of
synchronization. So now I only get conflict files, if somehow
my phone doesn't have Internet and I edit files on my phone,
which happens very rarely.
the_pwner224 wrote 1 day ago:
My Syncthing experience matches Oxodao's. Over years with >10k
files / 100 gb, I've only ever had conflicts when I actually
made conflicting simultaneous changes.
I use it on my phone (configured to only sync on WiFi), laptop
(connected 99% of the time), and server (up 100% of the time).
The always-up server/laptop as a "master node" are probably
key.
troyvit wrote 8 hours 6 min ago:
That is good advice from both of you. I knew it has to be me
because it's honestly one of the most successful and popular
open source tools I've worked with. I think I should've made
that more clear in my original comment.
Brian_K_White wrote 1 day ago:
Same. I don't know why so many people like syncthing.
Imustaskforhelp wrote 1 day ago:
I don't think there is a good alternative to open source Syncthing, the way it just does syncing.
Let me know if you know of any alternatives that have helped you. I haven't tried Syncthing myself, but I have heard good things about it overall, so I feel like I like it already even though I haven't tried it, I guess.
Joeri wrote 1 day ago:
I had this when I had a windows system in the mix. Windows
handles case differently in filenames than linux and macOS, and
it caused conflicts.
Oxodao wrote 1 day ago:
That should not happen. I use it a lot and have never had this
issue; there may be something wrong with your setup.
A good idea is to have it on an always-on server and add your
share as an encrypted one (like you set the password on all
your apps but not on the server); this pretty much results in a
dropbox-like experience since you have a central place to sync
even when your other devices are not online
nucleardog wrote 1 day ago:
> Sadly for the general Dropbox replacement I haven't found
anything either.
I had really good luck with Seafile[0]. It's not a full groupware
solution, just primarily a really good file syncing/Dropbox
solution.
Upsides are everything worked reliably for me, it was much faster,
does chunk-level deduplication and some other things, has native
apps for everything, is supported by rclone, has a fuse mount
option, supports mounting as a "virtual drive" on Windows, supports
publicly sharing files, shared "drives", end-to-end encryption, and
practically everything else I'd want out of "file syncing
solution".
The only thing I didn't like about it is that it stores all of your
data as, essentially, opaque chunks on disk that are pieced
together using the data in the database. This is how it achieves
the performance, deduplication, and other things I _liked_. However
it made me a little nervous that I would have a tough time
extracting my data if anything went horribly wrong. I took backups.
Nothing ever went horribly wrong over 4 or 5 years of running it. I
only stopped because I shelved a lot of my self-hosting for a bit.
[0]:
HTML [1]: https://www.seafile.com/en/home/
raphman wrote 1 day ago:
I can confirm this. We have been using it for 10 years now in our
research lab. No data loss so far. Performance is great.
Integration with OnlyOffice works quite well (there were sync
problems a few years ago - I think upgrading OnlyOffice solved
this issue).
justinparus wrote 1 day ago:
thanks for sharing. been looking for something like this for
awhile
Semaphor wrote 1 day ago:
Yeah, went with that as well. It's blazingly fast compared to
NC.
oompydoompy74 wrote 1 day ago:
Pretty sure that NextCloud uses Seafile behind the scenes
unless I'm mistaken.
Semaphor wrote 1 day ago:
You are mistaken.
Handy-Man wrote 1 day ago:
Have you looked into [1] ? While it's not drop-in replacement for
Google Drive/Dropbox, it has been serving me well for a similar
quick use case.
HTML [1]: https://filebrowser.org/
thuttinger wrote 1 day ago:
For a general file sharing / storage solution there is also
OpenCloud: [1] It's what I want to try next. Written in go, it
looks promising.
HTML [1]: https://opencloud.eu/de
karamanolev wrote 1 day ago:
Too many Cloud things! OwnCloud, NextCloud, OpenCloud. There
have* to be better names available...
DANmode wrote 5 hours 15 min ago:
Suggest one.
guilamu wrote 1 day ago:
I'd say Ente-photo is at least as good if not better than Immich.
HTML [1]: https://github.com/ente-io/ente
neop1x wrote 15 hours 51 min ago:
I'm also a very happy Ente user. I use Garage for its S3-like
storage, with one of the nodes running on my local network (LAN).
My local DNS (CoreDNS) is also configured to use this local node
for the domain, which makes everything very fast.
palata wrote 1 day ago:
Does it have a mobile app that backs up the photos while in the
background and can essentially be "forgotten"? That's pretty much
what I need for my family: their photos need to get to my server
magically.
omnimus wrote 1 day ago:
Both Ente and Immich have that.
fauigerzigerk wrote 1 day ago:
I'm a very happy Ente Photos user as well.
omnimus wrote 1 day ago:
I would say the opposite. Ente has one huge advantage and that it
is e2ee so it's a must if you are hosting someone else photos.
But if you are planning to run something on your server/NAS for
yourself then Immich has many advantages (that often relate to
the e2ee). For example... your files are still files on the disk
so less worry about something unrecoverably breaking. And you can
add external locations. With Ente it is just about backing up
your phone photos. Immich works pretty well as camera photo
organizer.
dangus wrote 1 day ago:
The Ente desktop app has a continuous export function that'll
just dump everything into plain file directories.
It makes a little more sense when you're using their cloud
version, because otherwise you're storing the data twice.
lompad wrote 1 day ago:
Recently people built a super-lightweight alternative, named
copyparty[0]. To me that looks like it does everything people tend to
need without all the bloat.
[0]:
HTML [1]: https://github.com/9001/copyparty
hebelehubele wrote 18 hours 37 min ago:
It's an amazing piece of software. If only the code & the
configuration were readable. It's overly reliant on 2-3 letter
abbreviations, which I'm sure follow a system, but one I haven't
yet been able to decipher.
peanut-walrus wrote 20 hours 32 min ago:
Personally, the only thing I need is stable clients on both desktop
and mobile with bidirectional sync. Copyparty seems really cool,
but it explicitly does not do that.
wltr wrote 17 hours 49 min ago:
Have you considered Syncthing? There's a shiny new and super cool
Sushi Train (or Sync Train, by its other name) app for iOS (I wish
the author would make it a paid app, that's how much I like it!): [1] Not
affiliated, but a very happy user.
I mention iOS, because that was what I needed personally, as
there was syncthing for Android since forever.
HTML [1]: https://github.com/pixelspark/sushitrain
Dylan16807 wrote 1 day ago:
> everything people tend to need
> NOTE: full bidirectional sync, like what nextcloud and syncthing
does, will never be supported! Only single-direction sync
(server-to-client, or client-to-server) is possible with copyparty
Is sync not the primary use of nextcloud?
scrollop wrote 1 day ago:
Copyparty looks amazing, wow
HTML [1]: https://www.youtube.com/watch?v=15_-hgsX2V0
ryandrake wrote 1 day ago:
I watched the video, too, and while amazing, it's the poster
child for feature creep. It starts out as a file server, and at
some point in the demo it's playing transcoded media and editing
markdown??
Really impressive, but I think I'll stick to NFS.
chappi42 wrote 1 day ago:
This is not an alternative as it only covers files. Mind what is in
the article: "I like what Nextcloud offers with its feature set and
how easily it replaces a bunch of services under one roof (files,
calendar, contacts, notes, to-do lists, photos etc.), but ".
For us Nextcloud AIO is the best thing under the sun. It works
reasonably well for our small company (about 10 ppl) and saves us
from Microsoft. I'm very grateful to the developers.
Hopefully they are able to act upon such findings, or rewrite it
in Go :-). Mmh, if Berlin (Germany) didn't waste so much money
on ill-advised, ideology-driven and long-term state-destroying
actions and "NGOs", they'd have enough money to fund 100s of such
rewrites. Alas...
j-krieger wrote 13 hours 14 min ago:
Germany does fund and work on a couple of serious OSS projects.
Look for Opencode. They are also actively working on the matrix
spec.
upboundspiral wrote 1 day ago:
I think what you described is basically ownCloud Infinite Scale
(ocis). I haven't tested it myself but it's something I've been
considering. I run normal owncloud right now over nextcloud as it
avoided a few hiccups that I had.
preya2k wrote 1 day ago:
OCIS seems to have lost most of their team. They now work on a
fork called OpenCloud.
HTML [1]: https://github.com/opencloud-eu
lachiflippi wrote 1 day ago:
Why should Germany be wasting public money on a private company
who keeps shoveling more and more restrictions on their
open-source-washed "community" offering, and whose "enterprise"
pricing comes in at twice* the price MS365 does for fewer
features, worse integration, and with added costs for hosting,
storage, and maintenance?
* or same, if excluding nextcloud talk, but then missing a chat
feature
chappi42 wrote 1 day ago:
It makes a lot of sense for Germany to keep some independence
from foreign proprietary cloud providers (Microsoft, Google);
money very well invested imo. It helps the local industry, and
data stays under German sovereignty.
I find your "open-source-washed" remark misplaced and quite
derogatory. Nextcloud is, imo, totally right to (try to)
monetize. They have to; they must further improve the technical
backbone to stay competitive with the big boys.
redrblackr wrote 1 day ago:
Could you expand on what restrictions they have placed on the
community version?
lachiflippi wrote 1 day ago:
At the very least their app store, which is pretty much
required for OIDC, most 2FA methods, and some other features,
stops working at 500 users. AFAIK you can still manually
install addons, it's just the integration that's gone, though
I'm not 100% sure. Same with their notification push service
(which is apparently closed source?[0]), which wouldn't be as
much of an issue if there were proper docs on how to stand up
your own instance of that.
IIRC they also display a banner on the login screen to all
users advertising the enterprise license, and start emailing
enterprise ads to all admin users.
Their "fair use policy"[1] also includes some "and more"
wording.
[0] [1]
HTML [1]: https://github.com/nextcloud/notifications/issues/82
HTML [2]: https://nextcloud.com/fairusepolicy/
akoboldfrying wrote 1 day ago:
> their app store, which is pretty much required for OIDC,
most 2FA methods, and some other features, stops working at
500 users
How dare they. I just want to share photos and calendar
with the 502 people in my immediate family.
lachiflippi wrote 14 hours 48 min ago:
This may come as a surprise to you, but there are
organizations, for example German municipalities, that
have more than 500 users but can't afford to start
pumping tens or hundreds of thousands per year into a
file sharing service. Nextcloud themselves recognize this
and offer 95%+ discounts to edu, similar to what Adobe,
Microsoft, and Git[Hub,Lab] are doing.
cbondurant wrote 1 day ago:
It makes perfect sense to me that nextcloud is a good fit for a
small company.
My biggest gripe with having used it for far longer than I should
have was always that it expected far too much maintenance (4
month release cadence) to make sense for individual use.
Doing that kind of regular upkeep on a tool meant for a whole
team of people makes for a far more reasonable cost-benefit tradeoff.
Especially since it only needs one technically savvy person
working behind the scenes, and is very intuitive and familiar on
its front-end. Making for great savings overall.
TuningYourCode wrote 1 day ago:
Hetzner's Storage Share product line offers a managed
Nextcloud instance. I'm using them as I didn't want to care
about updating it myself.
The only downside is you can't use apps/plugins which require
additional local tools (e.g. ocrmypdf), but others can be used
just fine.
Calling remotely hosted services works (e.g. if you have
Elasticsearch on a VPS and set up the Nextcloud full-text search
app accordingly).
mynameisvlad wrote 1 day ago:
There is no way it's going to be completely rewritten from
scratch in Go, and none of whatever Germany is or isn't doing
affects that in any way, shape or form.
preya2k wrote 1 day ago:
Actually, it's already been done by the former Nextcloud
fork/predecessor. OwnCloud shared a big percentage of the
Nextcloud codebase, but they decided to rewrite everything
under the name OCIS (OwnCloud Infinite Scale) a couple of years
ago. Recently, OwnCloud got acquired by Kiteworks and it seemed
like they got in a fight with most of the staff. So big parts
of the team left to start "OpenCloud", which is a fork of OCIS
and is now a great competitor to Nextcloud. It's much more
stable and uses less resources, but it also does a lot less
than Nextcloud (namely only File sharing so far. No Apps, no
Groupware.)
HTML [1]: https://github.com/opencloud-eu
mynameisvlad wrote 23 hours 20 min ago:
OCIS does only a small part of why people deploy Nextcloud. I
have run it; it's great, but it's not a replacement for
the full suite, nor is it trying to be.
brendoelfrendo wrote 23 hours 37 min ago:
I have OpenCloud working on my home server, and it features
integration with the Collabora suite of software for office
apps. Draw.io is also already supported.
brnt wrote 16 hours 22 min ago:
They offer a Docker Compose file that sets up Collabora for
you, but I can't find any info on other apps, let
alone integration. Where can I see what they support?
brendoelfrendo wrote 5 hours 13 min ago:
You're right, it was my mistake. The docker compose file
can set up Collabora for you and allows you to open
documents from inside OpenCloud by opening the file in an
embedded Collabora view. Likewise, Draw.io works in a
similar fashion, opening a view to embed.diagrams.net.
Underneath it's just hosting the files and offloads the
operations to other apps. It's convenient, but not
particularly sophisticated.
preya2k wrote 16 hours 7 min ago:
There are no "Apps". It's not a universal App platform
like Nextcloud. It's just file sharing (and optionally a
Radicale calendar server via environment variable, but
without UI). There's optional plugins to open vendor
specific files right in the browser.
hadlock wrote 1 day ago:
Thanks for sharing this, I've been wanting to look at private
cloud stuff but it was all written in PHP. It looks like
OpenCloud is majority Go with some PHP and Gherkin, which is
a step in the right direction.
seemaze wrote 1 day ago:
I found copyparty to be too busy on the UI/UX side of things. I've
settled on dufs[0]: quick to deploy, fast to use, and cross
platform.
[0]
HTML [1]: https://github.com/sigoden/dufs
davidcollantes wrote 1 day ago:
Do you have a systemd unit for it, run it with Docker, or simply
run it manually as needed? I find its simplicity perfect!
seemaze wrote 1 day ago:
I run it manually as needed. It's already packaged for both
Alpine Linux and Homebrew which suits my ad-hoc needs
wonderfully!
nucleardog wrote 1 day ago:
I think "people" deserves clarification: Almost the entire thing
was written by a single person and with a _seriously_ impressive
feature set. The launch video is well worth a quick watch: [1] I
don't say this to diminish anyone else's contribution or criticize
the software, just to call out the absolutely herculean feat this
one person accomplished.
HTML [1]: https://www.youtube.com/watch?v=15_-hgsX2V0&pp=ygUJY29weXB...
flanbiscuit wrote 20 hours 40 min ago:
There was an HN discussion about it 3 months ago with responses
from the author, in case anyone is interested:
HTML [1]: https://news.ycombinator.com/item?id=44711519
mouse-5346 wrote 1 day ago:
Yeah, "people" there pretty much means one dude. It's mind-boggling
how much that little program can do considering it had one dev.
tspng wrote 1 day ago:
Don't forget, "Lot of the code was written on a mobile phone
using tmux and vim on a bus".
That's crazy.
Imustaskforhelp wrote 1 day ago:
I have tried to run micro [1] on my phone, but it's some other
beast if someone is running tmux and vim on their phone.
I have found that typing normally is really preferable on
Android; I didn't like having to press Ctrl or other modifier
keys, so since micro is really just such a great thing overall,
it fit so perfectly that when I had that device, I was coding
more basic Python on my phone than I was on my PC.
Back then I was running Alpine on UserLand, and I learnt a lot
trying to make that Alpine VM of sorts work with Python, as it
basically refused to. I might have forgotten most of it now, but
the solution was very hacky (maybe gcompat) and I liked it.
HTML [1]: https://micro-editor.github.io/
cess11 wrote 8 hours 58 min ago:
I do a lot of development and sysadmin stuff on phones and
tablets, to a large degree due to PentiKeyboard. It helps a
lot to see the entire screen and have all the usual
keyboard sends that a regular, physical keyboard has.
HTML [1]: https://software-lab.de/penti.html
dade_ wrote 1 day ago:
The Nextcloud Android app is particularly bad if you use it to back
up your camera's DCIM directory and then delete the photos on your
phone: it overwrites the files on Nextcloud as new photos are taken.
I get why this happened, but it is terrible.
branon wrote 23 hours 4 min ago:
Will this also happen if you let the Nextcloud app rename the files
as it uploads them? I usually take that option and haven't had an
issue with this although I don't have it set to delete from my
phone after uploading.
Yie1cho wrote 1 day ago:
It's bad for everything.
I have lots of txt files on my phone which just don't get synced
up to my server (the files on the server are 0 bytes long).
I'm using txt files to take notes because the Notes app never
worked for me (I get sync errors on any Android phone while it
works on iPhone).
ivolimmen wrote 1 day ago:
On the same note: a Jira ticket, as configured where I work, is
42 MB for the entire page. And I use ad blockers, so I already skip
the page-counting stuff.
freefaler wrote 1 day ago:
Wow, that's a lot.
Our local installation, loaded with zero cached requests (to not
suffer their slooooow cloud):
82 / 86 requests
1,694 kB / 1,754 kB transferred
6,220 kB / 6,281 kB resources
Finish: 11.73 s
DOMContentLoaded: 1.07 s
Load: 1.26 s
esafak wrote 1 day ago:
Does anyone know what they are doing wrong to create such large
bundles? What is the lesson here?
eMerzh wrote 1 day ago:
I think some of the issue here is that, first, Nextcloud tries to be
compatible with any managed/shared hosting.
They also treat every "module"/"app", whatever you call it, as a
completely distinct SPA without providing much of an SDK/framework,
which means each app adds its own deps, manages its own build, etc.
Also don't forget that an app can even be a part of a screen, not the
whole thing.
bastawhiz wrote 1 day ago:
Not paying attention.
1. Indiscriminate use of packages when a few lines of code would do.
2. Loading everything on every page.
3. Poor bundling strategy, if any.
4. No minification step.
5. Polyfilling for long dead, obsolete browsers
6. Having multiple libraries that accomplish the same thing
7. Using tools and then not doing any optimization at all (like using
React and not enabling React Runtime)
Arguably things like an email client and file storage are apps and
not pages so a SPA isn't unreasonable. The thing is, you don't end up
with this much code by being diligent and following best practices.
You get here by being lazy or uninformed.
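As a concrete sketch of point 1 (the helper and example here are
illustrative, not taken from Nextcloud's actual code): pulling in a
whole utility library for a single function adds a dependency to the
bundle, when a few lines of plain JavaScript do the same job:

```javascript
// Instead of installing a utility library just to get chunk(),
// a hand-rolled version is a handful of lines and adds no
// dependency weight to the bundle:
function chunk(arr, size) {
  const out = [];
  for (let i = 0; i < arr.length; i += size) {
    out.push(arr.slice(i, i + size));
  }
  return out;
}

console.log(chunk([1, 2, 3, 4, 5], 2)); // [[1, 2], [3, 4], [5]]
```

The same applies to many small helpers (debounce, deep clone, date
formatting) that are often imported wholesale.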
nullgeo wrote 1 day ago:
What is React runtime? I looked it up and the closest thing I came
across is the newly announced React compiler. I have a vested
interest in this because currently working on a micro-SaaS that
uses React heavily and still suffering bundle bloat even after
performing all the usual optimizations.
adzm wrote 1 day ago:
React compiler is awesome for minimizing unnecessary renders but
doesn't help with bundle size; might even make it worse. But in
my experience it really helps with runtime performance if your
code was not already highly optimized.
bastawhiz wrote 1 day ago:
When you compile JSX to JavaScript, it produces a series of
function calls representing the structure of the JSX. In a recent
major version, React added a new set of functions which are more
efficient at both runtime and during transport, and don't require
an explicit import (which helps cut down on unnecessary
dependencies).
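For reference, this newer JSX transform (the "automatic" runtime,
available since React 17) is opt-in at build time. A minimal sketch of
a Babel configuration enabling it, assuming @babel/preset-react is
installed:

```javascript
// babel.config.js -- with runtime: "automatic", compiled JSX imports
// jsx()/jsxs() from react/jsx-runtime automatically, so files no
// longer need `import React` in scope just for React.createElement.
module.exports = {
  presets: [["@babel/preset-react", { runtime: "automatic" }]],
};
```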
silverwind wrote 1 day ago:
You mean the automatic runtime introduced in 2020. It does not
have any impact on the performance, it's just a pure developer
UX improvement.
bastawhiz wrote 22 hours 25 min ago:
It improves the bundle size for most apps because the
imported functions can be minified better. Depending on your
bundler, it can avoid function calls at runtime.
jrochkind1 wrote 1 day ago:
I'm curious how much Javascript eg gmail and google docs/drive give
you, in comparison.
tracker1 wrote 1 day ago:
I just checked Google Calendar: it's under 3 MB of JS downloaded
(around 8 MB uncompressed)... it's also a lot more responsive than
Nextcloud web. Even then, it's not necessarily the size; I think
that's mostly a symptom of the larger issues likely at play.
There are a lot of requests made in general; these can be good, bad
or indifferent depending on the actual connection channels and
configuration with the server itself. The pieces are too
disconnected from each other... the Nextcloud org has 350
repositories on GitHub. I'm frankly surprised, as even 30 or so
would be a lot... it's literally 10x a generous expectation. I'd
rather deal with a crazy mono-repo at that point.
jrochkind1 wrote 1 day ago:
OP really focused on payload size, is why I was curious.
> On a clean page load [of nextcloud], you will be downloading
about 15-20 MB of Javascript, which does compress down to about 4-5
MB in transit, but that is still a huge amount of Javascript. For
context, I consider 1 MB of Javascript to be on the heavy side for
a web page/app.
> …Yes, that Javascript will be cached in the browser for a
while, but you will still be executing all of that on each visit to
your Nextcloud instance, and that will take a long time due to the
sheer amount of code your browser now has to execute on the page.
While Nextcloud may have a ~60% bigger JS payload, sounds like
perhaps that could have been a bit of a misdirection/misdiagnosis,
and it's really about performance characteristics of the JS rather
than strictly payload size or number of lines of code executed.
On a Google Doc load chosen by whatever my browser location bar
autocompleted, I get around twenty JS files, the two biggest are
1MB and 2MB compressed.
tracker1 wrote 1 day ago:
Yeah, without a deeper understanding it's really hard to say...
just the surface level look, I'm not really at all interested in
diving deeper myself. I'd like to like it... I tried out a test
install a couple times but just felt it was clunky. Having a
surface glance at the org and a couple of the projects, it
doesn't surprise me that it felt that way.
a3w wrote 1 day ago:
Gmail should be server-side, with as much JS as you want to use.
Unless they moved away from the philosophy they started with GWT
(Google Web Toolkit) for Gmail, and perhaps even Inbox (RIP).
tokarf wrote 1 day ago:
Nextcloud is not perfect, but it's still one of the few major
projects that has not shifted to a business-oriented licence, and
where all components are available rather than paywalled behind an
enterprise edition.
So yes, not perfect, bloated JS, but it works and is maintained.
So I'd rather thank all the developers involved in Nextcloud than
whine about bloated JS.
Propelloni wrote 1 day ago:
That's not quite right. There are features that are only available to
enterprise customers, or require proprietary plug-ins like Sendent.
Do I need them for my home server? No. Do I need them for my company?
Yes, but costs compared to MS 365 are negligible.
yupyupyups wrote 1 day ago:
>So I'd rather thank all the developers involved in Nextcloud than
whine about bloated JS.
Good news! You can do both.
bArray wrote 1 day ago:
NextCloud does feel slow. What I want is not only a cloud service that
does lots of common tasks, but it also should do it lightly and simply.
I'm extremely tempted to write a lightweight alternative. I'm thinking
sourcehut [1] vs GitHub.
HTML [1]: https://sourcehut.org/
preya2k wrote 1 day ago:
Take a look at OpenCloud. It's a Go-based rewrite of the former
OwnCloud team.
It works very well, has polished UI and uses very little resources.
It also does a lot less than Nextcloud.
HTML [1]: https://github.com/opencloud-eu
tokarf wrote 1 day ago:
Just compare comparable products.
Nextcloud is an old product that inherits from OwnCloud, developed
in PHP since 2010.
It has extensibility at its core through the thousands of extensions
available.
So yaaay, compare it with sourcehut...
bArray wrote 1 day ago:
> Just compare comparable products.
> So yaaay, compare it with sourcehut...
I'm not saying that sourcehut is the same in any way, but I want
the difference between GitHub and sourcehut to be the difference
between NextCloud and alternative.
> Nextcloud is an old product that inherits from OwnCloud, developed
in PHP since 2010.
Tough situation to be in, I don't envy it.
> It has extensibility at its core through the thousands of
extensions available.
Sure, but I think for some limited use cases, something better
could be imagined.
bn-usd-mistake wrote 1 day ago:
Aren't you just confirming the parent that Nextcloud is the big,
feature-rich behemoth like Github?
alecsm wrote 1 day ago:
Maybe that's the problem: "old product that inherits from OwnCloud".
mickael-kerjean wrote 1 day ago:
I made one such lightweight alternative frontend:
HTML [1]: https://github.com/mickael-kerjean/filestash
jhot wrote 22 hours 15 min ago:
I've been running filestash in front of sftpgo (using a combination
of s3 and nfs for file backends) for a couple years now and have
been very happy with it.
branon wrote 1 day ago:
I have been considering [1] + Immich as an alternative
Nextcloud's client support is very good though and it has some great
apps, I use PhoneTrack on road trips a lot
HTML [1]: https://bewcloud.com/
troyvit wrote 1 day ago:
> I use PhoneTrack on road trips a lot
If every aspect of Nextcloud was as clean, quick and light-weight as
PhoneTrack this world would be a different place. The interface is a
little confusing but once I got the hang of it it's been awesome and
there's just nothing like it. I use an old phone in my murse with
PhoneTrack on it and that way if I leave it on the bus (again) I
actually have a chance of finding it.
No $35/month subscription, and I'm not sharing my location data with
some data aggregator (aside from Android of course).
glenstein wrote 1 day ago:
Fantastic recommendation, it's like exactly what the doctor ordered
given the premise of this thread. Does Bewcloud play nice with DAV or
other open protocols or (dare I hope) nextcloud apps? I wouldn't mind
using nextcloud apps paired with a better web front end.
zeagle wrote 1 day ago:
Immich is a night and day improvement for photos vs nextcloud. You
could roll it in addition if you wanted to try.
andai wrote 1 day ago:
For reference, 20 MB is three hundred and thirteen Commodores.
magicalhippo wrote 1 day ago:
Or the same number of 64k intros[1][2][3]...
[1] [2]
HTML [1]: https://www.youtube.com/watch?v=iXgseVYvhek
HTML [2]: https://www.youtube.com/watch?v=ZWCQfg2IuUE
HTML [3]: https://www.youtube.com/watch?v=4lWbKcPEy_w
mrweasel wrote 1 day ago:
The article suggests that it takes 14MB of Javascript to do just the
calendar. I doubt that all of my calendar events for 2025 are 14 MB.
chaostheory wrote 1 day ago:
Sure, but what people leave out is that it's mostly C and assembly.
That just isn't realistic anymore if you want a better developer
experience that leads to faster feature rollout, better security, and
better stability.
This is like when people reminisce about the performance of windows
95 and its apps while forgetting about getting a blue screen of death
every other hour.
tracker1 wrote 1 day ago:
I think it's a double edged sword of Open-Source/FLOSS... some
problems are hard and take a lot of effort. One example I
consistently point to is core component libraries... React has MUI
and Mantine, and I'm not familiar with any open-source alternatives
that come close. As a developer, if there was one for
Leptos/Yew/Dioxus, I'd have likely jumped ship to Rust+WASM.
They're all fast enough, with different advantages and disadvantages.
All said... I actually like TypeScript and React fine for teams of
developers... I think NextCloud likely has coordination issues that
go beyond the language or even libraries used.
trashb wrote 1 day ago:
Exactly javascript is a higher level language with a lot of
required functionality build in. When compared to C you would need
to (for most tasks) write way less actual code in javascript to
achieve the same result, for example graphics or maths routines.
Therefore it's crazy that it's that big.
magicalhippo wrote 1 day ago:
Windows 2000 was quite snappy on my Pentium 150, and pretty rock
solid. It was when I stopped being good at fixing computers because
it just worked, so I didn't get much practice.
chaostheory wrote 1 day ago:
Win2000 is in the same class as Win95 despite being slightly more
stable. It still locked up and crashed more frequently than
modern software.
magicalhippo wrote 1 day ago:
Then you did something special. For me Win2k was at least three
orders of magnitude more stable, and based on my buddies that
was not exceptional.
tracker1 wrote 1 day ago:
I did get a BSOD from a few software packages in Win2k, but it
was fewer and much farther between than Win9x/me... I didn't
bump to XP until after SP3 came out... I also liked Win7 a lot.
I haven't liked much of Windows since 7 though.
Currently using Pop + Cosmic.
robin_reala wrote 1 day ago:
The complete Doom 2, including all graphics, maps, music and sound
effects, shipped on 4 floppies, totalling 5.76MB.
zdragnar wrote 1 day ago:
The original Doom 2 rendered 64,000 pixels (320x200). 4k UHD monitors
now show 8.3 million pixels.
YMMV.
Of course, Doom 2 is full of Carmack shenanigans to squeeze every
possible ounce of performance out of every byte, written in hand
optimized C and assembly. Nextcloud is delivered in UTF-8 text, in
a high level scripting language, entirely unoptimized with lots of
low hanging fruit for improvement.
ekjhgkejhgk wrote 1 day ago:
You know apps don't store pixels, right? So why are you counting
pixels?
zdragnar wrote 1 day ago:
A single picture that looks decent on a modern screen, taken
from a modern camera, can easily be larger than the original
Doom 2 binary.
ekjhgkejhgk wrote 1 day ago:
You don't need pictures for a CRUD app. Should all be
vectorial in any case.
trashb wrote 1 day ago:
Sure but i doubt there is more image data in the delivered
nextcloud data compared to doom2, games famously need textures
where a website usually needs mostly vector and css based
graphics.
Actually Carmack did squeeze every possible ounce of performance
out of DOOM, however that does not always mean he was optimizing
for size.
If you want to see a project optimized for size you might check
out ".kkrieger" from ".theprodukkt", which accomplishes a 3D
shooter in 97,280 bytes.
You know how many characters 20 MB of UTF-8 text is, right? If we
are talking about JavaScript it's probably mostly ASCII, so quite
close to 20 million characters. If we take a wild estimate of 80
characters per line, that would be 250,000 lines of code.
I personally think 20 MB is outrageous for any website, webapp or
similar, especially if you want to offer a product to a wide
range of devices on a lot of different networks. Reloading a huge
chunk of that on every page load feels like bad design.
Developers usually take for granted the modern convenience of a
good network connection, imagine using this on a slow connection
it would be horrid.
Even in the western "first world" countries there are still quite
some people connecting with outdated hardware or slow
connections, we often forget them.
If you are making any sort of webapp you ideally have to think
about every byte you send to your customer.
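The line-count estimate above is simple arithmetic; restated as a
sketch (both inputs are the commenter's rough assumptions, not
measured values):

```javascript
// Back-of-the-envelope: 20 MB of mostly-ASCII JavaScript is roughly
// 20 million characters (~1 byte per character); at ~80 characters
// per line, that is on the order of 250,000 lines of source.
const payloadChars = 20_000_000; // ~20 MB, assuming 1 byte per char
const charsPerLine = 80;         // rough average line length
const lines = payloadChars / charsPerLine;
console.log(lines); // 250000
```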
Yie1cho wrote 1 day ago:
yes, but why isn't it optimised? not as extreme as doom had to
be, but to be a bit better? especially the low hanging fruits.
this is why i think there's another version for customers who are
paying for it, with tuning, optimization, whatever.
hamburglar wrote 1 day ago:
I mean, if you're going to include Carmack's relentless
optimizer mindset in the description, I feel like your
description of the NextCloud situation should probably end with
"and written by people who think shipping 15MB of JavaScript
per page is reasonable."
mlok wrote 1 day ago:
Could an installable PWA solve this?
thesuitonym wrote 1 day ago:
> Could ignoring the problem solve this?
ilumanty wrote 1 day ago:
Could more diligence in the codebase solve this?
floundy wrote 1 day ago:
I'm still setting up my own home server, adding one functionality at a
time. I wanted to like Nextcloud but it's just too bloated.
Radicale is a good calendar replacement. I'd rather have
single-function apps at this point.
servercobra wrote 1 day ago:
Any good file syncing/drive replacements? My Synology exists pretty
much because Synology Drives works so well syncing Mac and iOS.
imcritic wrote 1 day ago:
Unison. Unfortunately it has no mobile apps, though.
thesuitonym wrote 1 day ago:
rsync, FTP, and SMB have all existed for decades, work very well
on spotty, slow connections (well, maybe not SMB), and are very,
very small utilities.
lompad wrote 1 day ago:
Copyparty. Found that recently and absolutely love it.
zeagle wrote 1 day ago:
I went from cloud to local SMB shares to Nextcloud to Seafile.
Really happy with the latter. Works, no bloat, versioning and some
file sharing. The pro version is free with three or fewer users. I
use the CLI client to mount the libraries into folders and share
that over SMB, with subst X: onto the root directory on laptops
for family. Borgbackup of that offsite for backup.
sira04 wrote 1 day ago:
Pretty happy with Resilio Sync. I use it on Mac, and linux in a
docker container.
imcritic wrote 1 day ago:
It is proprietary: the words "license" and "price" are on their
page => crapware.
Saris wrote 1 day ago:
Syncthing is great, but doesn't offer selective sync or virtual
files if you need those features.
ownCloud Infinite Scale might be the best option for a
full-featured file sync setup, as that's all it does.
danielcberman wrote 1 day ago:
It's not selective sync, but you can get something similar with
Ignore Files [1] in Syncthing. This functionality can also be
configured via the web GUI and within apps such as MobiusSync [2].
1. [1] 2.
HTML [1]: https://docs.syncthing.net/users/ignoring.html
HTML [2]: https://mobiussync.com
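As a sketch of the approach described above: Syncthing's ignore
patterns live in a `.stignore` file at the folder root, and the
first matching pattern wins, so an "un-ignore" line followed by a
catch-all gives selective-sync-like behaviour (the `Photos`
directory name here is just an example):

```
// .stignore -- sync only /Photos, skip everything else.
// Patterns are evaluated top to bottom; first match wins.
!/Photos
*
```

The `!` prefix marks a pattern as "never ignore", so `/Photos` is
kept while the trailing `*` ignores all other entries.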
ianopolous wrote 1 day ago:
You might like Peergos, which is E2EE as well. Disclosure: I work
on it. [1] You can try it out easily here: [2] Our iOS app is
still in the works, though.
HTML [1]: https://peergos.org
HTML [2]: https://peergos-demo.net
nickspacek wrote 1 day ago:
I've read good things about Seafile and have considered setting it
up on my Homelab... though when I looked at the documentation, it
too seemed quite large and I worried it wouldn't be the lightweight
solution I'm looking for.
selectodude wrote 1 day ago:
Seafile works pretty well. The iOS app is ass though. Everything
else is rock solid.
rkagerer wrote 1 day ago:
Where does it store metadata like the additional file properties
you can add? Does it use Alternate Data Streams for anything?
Does the AI run locally?
For anyone who might find it useful, here's a Reddit thread from
3 years ago on a few concerns about SeaFile I'd love to see
revisited with some updated discussion:
HTML [1]: https://www.reddit.com/r/selfhosted/comments/wzdp2p/are_...
selectodude wrote 1 day ago:
Seems like the AI runs wherever you want it - you enter an API
endpoint.
HTML [1]: https://manual.seafile.com/13.0/extension/seafile-ai/
FredFS456 wrote 1 day ago:
I think you could replace Nextcloud's syncing and file access use
cases with Syncthing and Copyparty respectively. IMO the biggest
downside is that Copyparty's UX is... somewhat obtuse. It's super
fast and functional, though.
DIR <- back to front page