Discussion:
default file descriptor limit ?
Poul-Henning Kamp
2015-04-13 08:16:36 UTC
$ limits
Resource limits (current):
[...]
openfiles 462357

say what ?

This wastes tons of pointless close system calls in programs which
use the suboptimal but best practice:

for (i = 3; i < sysconf(_SC_OPEN_MAX); i++)
close(i);

For reference Linux seems to default to 1024, leaving it up to
massive server processes to increase the limit for themselves.

I'm all for autosizing things but this is just plain stupid...
--
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
***@FreeBSD.ORG | TCP/IP since RFC 956
FreeBSD committer | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.
Poul-Henning Kamp
2015-04-13 08:22:00 UTC
--------
Post by Poul-Henning Kamp
$ limits
[...]
openfiles 462357
say what ?
This wastes tons of pointless close system calls in programs which
for (i = 3; i < sysconf(_SC_OPEN_MAX); i++)
close(i);
For reference Linux seems to default to 1024, leaving it up to
massive server processes to increase the limit for themselves.
I'm all for autosizing things but this is just plain stupid...
Just to give an idea how utterly silly this is:

#include <stdio.h>
#include <unistd.h>

int
main(int c, char **v)
{
int i, j;

for (j = 0; j < 100; j++)
for (i = 3; i < sysconf(_SC_OPEN_MAX); i++)
close(i);
return (0);
}

Linux: 0.001 seconds
FreeBSD: 17.020 seconds


PS: And don't tell me to fix all code in /usr/ports to use closefrom(2).
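For reference, the kind of per-program fix being referred to -- a sketch
only, assuming closefrom(2) where FreeBSD provides it and falling back to
the loop elsewhere; the helper name is invented:

#include <unistd.h>

/*
 * Sketch of a "close everything from lowfd up" helper: closefrom(2)
 * on FreeBSD, otherwise the slow loop bounded by the soft limit.
 */
static void
close_from(int lowfd)
{
#ifdef __FreeBSD__
    closefrom(lowfd);
#else
    long maxfd, fd;

    maxfd = sysconf(_SC_OPEN_MAX);
    if (maxfd == -1)
        maxfd = 1024;    /* assumed fallback when the limit is unknown */
    for (fd = lowfd; fd < maxfd; fd++)
        (void)close(fd);
#endif
}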
--
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
***@FreeBSD.ORG | TCP/IP since RFC 956
FreeBSD committer | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.
Slawa Olhovchenkov
2015-04-13 08:31:59 UTC
Post by Poul-Henning Kamp
--------
Post by Poul-Henning Kamp
$ limits
[...]
openfiles 462357
say what ?
This wastes tons of pointless close system calls in programs which
for (i = 3; i < sysconf(_SC_OPEN_MAX); i++)
close(i);
For reference Linux seems to default to 1024, leaving it up to
massive server processes to increase the limit for themselves.
This is typically done only at startup, I think?
Post by Poul-Henning Kamp
Post by Poul-Henning Kamp
I'm all for autosizing things but this is just plain stupid...
#include <stdio.h>
#include <unistd.h>
int
main(int c, char **v)
{
int i, j;
for (j = 0; j < 100; j++)
for (i = 3; i < sysconf(_SC_OPEN_MAX); i++)
close(i);
return (0);
}
Linux: 0.001 seconds
FreeBSD: 17.020 seconds
PS: And don't tell me to fix all code in /usr/ports to use closefrom(2).
% time ./a.out
0.581u 3.302s 0:03.88 100.0% 5+168k 0+0io 0pf+0w
% time limits -n 100 ./a.out
0.000u 0.004s 0:00.00 0.0% 0+0k 0+0io 0pf+0w
% limits
[...]
openfiles 116460

Maybe now is the time to introduce a new login class for desktop users,
with reduced limits for open files and some regional settings, and to
modify bsdinstall to support this. Maybe also some Gnome/KDE tools for
creating users (I don't use KDE/Gnome myself).

The base login class ('default') would not be touched: no limits, and
locale "C", used for system startup and daemons.
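A rough sketch of what such an entry in /etc/login.conf could look like
(the class name and values here are only examples):

desktop:\
        :openfiles=1024:\
        :lang=en_US.UTF-8:\
        :charset=UTF-8:\
        :tc=default:

After editing, cap_mkdb /etc/login.conf has to be re-run, and a user can
be assigned to the class with something like "pw usermod alice -L desktop".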
Poul-Henning Kamp
2015-04-13 08:39:39 UTC
--------
Post by Slawa Olhovchenkov
Post by Poul-Henning Kamp
This wastes tons of pointless close system calls in programs which
for (i = 3; i < sysconf(_SC_OPEN_MAX); i++)
close(i);
For reference Linux seems to default to 1024, leaving it up to
massive server processes to increase the limit for themselves.
This is typically done only at startup, I think?
No. This is mandatory whenever you spawn a sub-process with less privilege.
Post by Slawa Olhovchenkov
Maybe now is the time to introduce a new login class for desktop users, [...]
How about "now is the time to realize that very few processes need more
than a few tens of filedescriptors" ?

If Linux can manage with a hardcoded default of 1024, so can we...
--
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
***@FreeBSD.ORG | TCP/IP since RFC 956
FreeBSD committer | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.
Slawa Olhovchenkov
2015-04-13 08:52:27 UTC
Post by Poul-Henning Kamp
--------
Post by Slawa Olhovchenkov
Post by Poul-Henning Kamp
This wastes tons of pointless close system calls in programs which
for (i = 3; i < sysconf(_SC_OPEN_MAX); i++)
close(i);
For reference Linux seems to default to 1024, leaving it up to
massive server processes to increase the limit for themselves.
This is typically done only at startup, I think?
No. This is mandatory whenever you spawn a sub-process with less privilege.
Hmm.
1. What [Linux] applications do this?
2. If this limit is reduced -- how can a spawned application increase
it again, if needed? I am not sure that is possible.
Post by Poul-Henning Kamp
Post by Slawa Olhovchenkov
Maybe now is the time to introduce a new login class for desktop users, [...]
How about "now is the time to realize that very few processes need more
than a few tens of filedescriptors" ?
If Linux can manage with a hardcoded default of 1024, so can we...
And they have many FAQs on "how to overcome this restriction", including
"recompile libc".
Peter Wemm
2015-04-13 08:55:19 UTC
Post by Poul-Henning Kamp
If Linux can manage with a hardcoded default of 1024, so can we...
For what it's worth, a random Red Hat box:
soft limit: 1024
hard limit: 16384

OSX:
soft limit: 256
hard limit: unlimited

8-stable (ref8-amd64.freebsd.org):
soft limit: 11095
hard limit: 11095

9-stable (and later, ref9-amd64.freebsd.org etc):
soft limit: 707058
hard limit: 707058

This is fallout from the retarded maxusers changes a while ago.
--
Peter Wemm - ***@wemm.org; ***@FreeBSD.org; ***@yahoo-inc.com; KI6FJV
UTF-8: for when a ' or ... just won\342\200\231t do\342\200\246
Bruce Evans
2015-04-13 10:15:30 UTC
Post by Poul-Henning Kamp
--------
Post by Slawa Olhovchenkov
Post by Poul-Henning Kamp
This wastes tons of pointless close system calls in programs which
for (i = 3; i < sysconf(_SC_OPEN_MAX); i++)
close(i);
For reference Linux seems to default to 1024, leaving it up to
massive server processes to increase the limit for themselves.
This is typically done only at startup, I think?
No. This is mandatory whenever you spawn a sub-process with less privilege.
Not quite. sysconf() returns the soft rlimit. Privilege is not needed
to change the soft rlimit back and forth between 0 and the hard rlimit.
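A minimal sketch of that, using nothing but the standard getrlimit()/
setrlimit() interface (the 1024 value is only an example):

#include <sys/resource.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Sketch: an unprivileged process may move its soft RLIMIT_NOFILE
 * anywhere between 0 and the inherited hard limit, and back again;
 * only raising the hard limit requires privilege.
 */
int
main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
        return (1);

    /* Lower the soft limit (example value). */
    rl.rlim_cur = rl.rlim_max < 1024 ? rl.rlim_max : 1024;
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
        return (1);

    /* A spawned process inheriting this can raise it again, up to
       the hard limit, without any privilege. */
    rl.rlim_cur = rl.rlim_max;
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
        return (1);

    printf("soft limit back at %ju\n", (uintmax_t)rl.rlim_cur);
    return (0);
}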
Post by Poul-Henning Kamp
Post by Slawa Olhovchenkov
Maybe now is the time to introduce a new login class for desktop users, [...]
How about "now is the time to realize that very few processes need more
than a few tens of filedescriptors" ?
If Linux can manage with a hardcoded default of 1024, so can we...
RLIM_INFINITY seems reasonable for the hard limit and 1024 for the
soft limit. Large auto-configed values like 400000 are insignificantly
different from infinity anyway. They are per-process, so even the limits
of 11000 on my small systems are also essentially infinite.

There are also the kern.maxfilesperproc and kern.maxfiles limits. These
are poorly implemented, starting with their default values.
maxfilesperproc defaults to the same value as the rlimit. So a single
process that allocates up to its rlimit makes it impossible for any
other process, even privileged ones, to get anywhere near their rlimit.
Some over-commit is needed, but not this much. This has hacks to let
privileged processes allocate a few more descriptors, provided
privileged processes never over-commit.
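For reference, these limits can also be read programmatically -- a
FreeBSD-specific sketch using sysctlbyname(3):

#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

/*
 * Sketch (FreeBSD-specific): read the system-wide and per-process
 * descriptor limits that sit behind the per-process rlimit.
 */
int
main(void)
{
    int maxfiles, maxfilesperproc;
    size_t len;

    len = sizeof(maxfiles);
    if (sysctlbyname("kern.maxfiles", &maxfiles, &len, NULL, 0) != 0)
        return (1);
    len = sizeof(maxfilesperproc);
    if (sysctlbyname("kern.maxfilesperproc", &maxfilesperproc, &len,
        NULL, 0) != 0)
        return (1);
    printf("kern.maxfiles=%d kern.maxfilesperproc=%d\n",
        maxfiles, maxfilesperproc);
    return (0);
}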

Bruce

O'Connor, Daniel
2015-04-13 08:42:31 UTC
Post by Slawa Olhovchenkov
Maybe now is the time to introduce a new login class for desktop users,
with reduced limits for open files and some regional settings, and to
modify bsdinstall to support this. Maybe also some Gnome/KDE tools for
creating users (I don't use KDE/Gnome myself).
The base login class ('default') would not be touched: no limits, and
locale "C", used for system startup and daemons.
The question is: What is the upside of having such a large limit?

The downside is apparent - it's not the memory usage but the time wasted when running secure software, since you can't use closefrom: it isn't portable, so libraries/ports/etc don't use it (or, more realistically, Linux doesn't have it).

Other limits like max processes scaling with memory make sense, but maxfiles should probably scale more slowly (or maybe even not at all).

--
Daniel O'Connor
"The nice thing about standards is that there
are so many of them to choose from."
-- Andrew Tanenbaum
GPG Fingerprint - 5596 B766 97C0 0E94 4347 295E E593 DC20 7B3F CE8C
Bruce Evans
2015-04-13 09:46:59 UTC
Post by Poul-Henning Kamp
--------
Post by Poul-Henning Kamp
$ limits
[...]
openfiles 462357
say what ?
This wastes tons of pointless close system calls in programs which
for (i = 3; i < sysconf(_SC_OPEN_MAX); i++)
close(i);
sysconf() takes about as long as a failing close(), so best practice
is to cache the result of sysconf(). Best practice also requires
error checking.
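Something along those lines, as a sketch (the 1024 fallback is only an
assumed default):

#include <unistd.h>

/*
 * Sketch: query sysconf() once, with error checking, and run the close
 * loop against the cached value instead of re-querying per iteration.
 */
static void
close_high_fds(void)
{
    long maxfd, fd;

    maxfd = sysconf(_SC_OPEN_MAX);
    if (maxfd == -1)
        maxfd = 1024;    /* assumed fallback: error or indeterminate limit */
    for (fd = 3; fd < maxfd; fd++)
        (void)close(fd);
}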
Post by Poul-Henning Kamp
Post by Poul-Henning Kamp
For reference Linux seems to default to 1024, leaving it up to
massive server processes to increase the limit for themselves.
I'm all for autosizing things but this is just plain stupid...
I would have used the POSIX/C limit of 20 for the default, leaving
it up to mere bloatware to increase the limit. It is too late for
that. Next best is a default of RLIM_INFINITY. In FreeBSD-1,
RLIM_INFINITY was only 32 bits, so was only 5 times larger than
the above. Now it is 64 bits, so it is 20 billion times larger.
Getting the full limit also requires a 64-bit system, since
sysconf() only returns long. sysconf(_SC_OPEN_MAX) doesn't even
work on 32-bit systems if the limit is above LONG_MAX.
Post by Poul-Henning Kamp
#include <stdio.h>
#include <unistd.h>
int
main(int c, char **v)
{
int i, j;
for (j = 0; j < 100; j++)
for (i = 3; i < sysconf(_SC_OPEN_MAX); i++)
close(i);
return (0);
}
Linux: 0.001 seconds
FreeBSD: 17.020 seconds
1 millisecond is a lot too.

For full silliness:
- optimize as above so that this takes half as long
- increase the default so that it takes 20 billion times longer.
17.020 / 2 * 20 billion seconds = 5393+ years.
Post by Poul-Henning Kamp
PS: And don't tell me to fix all code in /usr/ports to use closefrom(2).
I don't see any way to fix ports. A few might break with the limit of
1024. The only good thing is that the Linux limit is not very large
and any ports that need a larger limit have probably been made to work
under Linux.

Worse but correct practice is to use the static limit OPEN_MAX iff it
is defined. Only broken systems like FreeBSD define it when the static
limit is different from the dynamic limit. In FreeBSD it is 64, so
naive software that trusts the limit gets much faster loops than the
above without really trying.
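Sketched out, that pattern looks like this (shown only to illustrate
what such naive software does, not as a recommendation):

#include <limits.h>
#include <unistd.h>

/*
 * Sketch of the "trust the static limit" pattern: use the compile-time
 * OPEN_MAX when the system defines one, otherwise ask at run time.
 * On FreeBSD this silently picks up the too-small static value of 64.
 */
static long
open_max(void)
{
#ifdef OPEN_MAX
    return (OPEN_MAX);
#else
    return (sysconf(_SC_OPEN_MAX));
#endif
}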

libc sysconf() has poor handling of unrepresentable rlimits in both
cases that have them (the other one is _SC_CHILD_MAX; the static limit
CHILD_MAX is broken by its existence in FreeBSD in the same way as
OPEN_MAX):

X case _SC_OPEN_MAX:
X if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
X return (-1);
X if (rl.rlim_cur == RLIM_INFINITY)
X return (-1);

This is not an error, just an unrepresentable limit. This fails to
set errno to indicate the error (getrlimit() didn't since this is
not an error). This works in practice because it is unreachable
-- the kernel clamps this particular rlimit, so RLIM_INFINITY is
impossible.

X if (rl.rlim_cur > LONG_MAX) {
X errno = EOVERFLOW;
X return (-1);
X }

As above, except it sets errno. If this were reachable, then it
would cause problems for buggy applications that don't check for
errors. But this case shouldn't be an error. LONG_MAX file
descriptors should be enough for any bloatware. When 32-bit
LONG_MAX runs out, the bloatware can simply require a 64-bit
system.

X return ((long)rl.rlim_cur);
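
For completeness, the checking a careful caller has to do to tell those
cases apart -- a sketch only, with an invented helper name:

#include <errno.h>
#include <unistd.h>

/*
 * Sketch: distinguish "no determinate limit" (-1, errno unchanged) from
 * a real error such as the EOVERFLOW case above (-1, errno set), and
 * fall back to a caller-supplied value either way.
 */
static long
open_max_or(long fallback)
{
    long lim;

    errno = 0;
    lim = sysconf(_SC_OPEN_MAX);
    if (lim == -1 && errno != 0)
        return (fallback);    /* genuine failure, e.g. EOVERFLOW */
    if (lim == -1)
        return (fallback);    /* indeterminate limit */
    return (lim);
}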

Bruce