in-place editing in 5.28 quickly exhausts open file descriptor limit #16602

Closed

p5pRT opened this issue Jul 1, 2018 · 16 comments

Comments

p5pRT commented Jul 1, 2018

Migrated from rt.perl.org#133314 (status was 'resolved')

Searchable as RT133314$

p5pRT commented Jul 1, 2018

From mtmiller@debian.org

This is a bug report for perl from mtmiller@debian.org,
generated with the help of perlbug 1.41 running under perl 5.28.0.


The in-place editing feature in perl 5.28 dies with an error if called
with 1020 or more files on the command line. This is a regression from
5.26, which was able to operate on any number of files on its command
line.

By way of example:

  $ for n in $(seq 1 1019); do mktemp XXXXXX > /dev/null; done
  $ ls | wc -l
  1019
  $ perl -pi -e s/a/b/ *
  $ rm *
  $ for n in $(seq 1 1020); do mktemp XXXXXX > /dev/null; done
  $ ls | wc -l
  1020
  $ perl -pi -e s/a/b/ *
  Cannot complete in-place edit of ZwuzEb: failed to rename work file
  'XXVtDC4r' to 'ZwuzEb': Bad file descriptor.

With the following modified program, one can look into the /proc
filesystem and see that by the end of the program, the perl process is
holding 1019 open file descriptors to the working directory.

  $ perl -pi -e 's/a/b/; BEGIN {print "$$\n"} END {sleep 60}' *

It looks like perl is doing an opendir on the parent directory of each
file edited with the -i option, but not closing that descriptor before
moving on to the next file in the argument list.
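
As a cross-check, the same count can be taken from inside the process itself; the END block below is a minimal sketch added for illustration (it assumes a Linux-style /proc filesystem and was not part of the original report):

  # Count this process's open file descriptors by listing /proc/self/fd;
  # the opendir below briefly adds one descriptor of its own to the total.
  END {
      opendir my $dh, "/proc/self/fd" or die "Cannot open /proc/self/fd: $!";
      my $open_fds = grep { !/^\.\.?$/ } readdir $dh;
      closedir $dh;
      print STDERR "open file descriptors at exit: $open_fds\n";
  }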

I have verified that this is related to the "safer in-place editing"
feature from #127663. At commit 3ff4feb perl is able to operate on
a list of 100,000 files, and at commit 9c6681c it dies on a list of
1020 files.



Flags:
  category=core
  severity=medium


Site configuration information for perl 5.28.0:

Configured by mike at Sat Jun 30 16:56:57 PDT 2018.

Summary of my perl5 (revision 5 version 28 subversion 0) configuration:
  Commit id: 6cb72a3
  Platform:
  osname=linux
  osvers=4.16.0-2-amd64
  archname=x86_64-linux
  uname='linux galago.mtmxr.com 4.16.0-2-amd64 #1 smp debian 4.16.12-1 (2018-05-27) x86_64 gnulinux '
  config_args='-des -Dprefix=/tmp/perl'
  hint=recommended
  useposix=true
  d_sigaction=define
  useithreads=undef
  usemultiplicity=undef
  use64bitint=define
  use64bitall=define
  uselongdouble=undef
  usemymalloc=n
  default_inc_excludes_dot=define
  bincompat5005=undef
  Compiler:
  cc='cc'
  ccflags ='-fwrapv -fno-strict-aliasing -pipe -fstack-protector-strong -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64'
  optimize='-O2'
  cppflags='-fwrapv -fno-strict-aliasing -pipe -fstack-protector-strong -I/usr/local/include'
  ccversion=''
  gccversion='8.1.0'
  gccosandvers=''
  intsize=4
  longsize=8
  ptrsize=8
  doublesize=8
  byteorder=12345678
  doublekind=3
  d_longlong=define
  longlongsize=8
  d_longdbl=define
  longdblsize=16
  longdblkind=3
  ivtype='long'
  ivsize=8
  nvtype='double'
  nvsize=8
  Off_t='off_t'
  lseeksize=8
  alignbytes=8
  prototype=define
  Linker and Libraries:
  ld='cc'
  ldflags =' -fstack-protector-strong -L/usr/local/lib'
  libpth=/usr/local/lib /usr/lib/gcc/x86_64-linux-gnu/8/include-fixed /usr/include/x86_64-linux-gnu /usr/lib /lib/x86_64-linux-gnu /lib/../lib /usr/lib/x86_64-linux-gnu /usr/lib/../lib /lib /lib64 /usr/lib64
  libs=-lpthread -lnsl -lgdbm -ldb -ldl -lm -lcrypt -lutil -lc
  perllibs=-lpthread -lnsl -ldl -lm -lcrypt -lutil -lc
  libc=libc-2.27.so
  so=so
  useshrplib=false
  libperl=libperl.a
  gnulibc_version='2.27'
  Dynamic Linking:
  dlsrc=dl_dlopen.xs
  dlext=so
  d_dlsymun=undef
  ccdlflags='-Wl,-E'
  cccdlflags='-fPIC'
  lddlflags='-shared -O2 -L/usr/local/lib -fstack-protector-strong'


@INC for perl 5.28.0:
  /tmp/perl/lib/site_perl/5.28.0/x86_64-linux
  /tmp/perl/lib/site_perl/5.28.0
  /tmp/perl/lib/5.28.0/x86_64-linux
  /tmp/perl/lib/5.28.0


Environment for perl 5.28.0:
  HOME=/home/mike
  LANG=en_US.UTF-8
  LANGUAGE (unset)
  LD_LIBRARY_PATH (unset)
  LOGDIR (unset)
  PATH=/bin:/usr/bin
  PERL_BADLANG (unset)
  SHELL=/bin/bash

p5pRT commented Jul 1, 2018

From @jkeenan

On Sun, 01 Jul 2018 02:08:43 GMT, mtmiller@debian.org wrote:

> This is a bug report for perl from mtmiller@debian.org,
> generated with the help of perlbug 1.41 running under perl 5.28.0.
>
> -----------------------------------------------------------------
> The in-place editing feature in perl 5.28 dies with an error if called
> with 1020 or more files on the command line. This is a regression from
> 5.26, which was able to operate on any number of files on its command
> line.
>
> By way of example:
>
> $ for n in $(seq 1 1019); do mktemp XXXXXX > /dev/null; done
> $ ls | wc -l
> 1019
> $ perl -pi -e s/a/b/ *
> $ rm *
> $ for n in $(seq 1 1020); do mktemp XXXXXX > /dev/null; done
> $ ls | wc -l
> 1020
> $ perl -pi -e s/a/b/ *
> Cannot complete in-place edit of ZwuzEb: failed to rename work file
> 'XXVtDC4r' to 'ZwuzEb': Bad file descriptor.
>
> With the following modified program, one can look into the /proc
> filesystem and see that by the end of the program, the perl process is
> holding 1019 open file descriptors to the working directory.
>
> $ perl -pi -e 's/a/b/; BEGIN {print "$$\n"} END {sleep 60}' *
>
> It looks like perl is doing an opendir on the parent directory of each
> file edited with the -i option, but not closing that descriptor before
> moving on to the next file in the argument list.
>
> I have verified that this is related to the "safer in-place editing"
> feature from #127663. At commit 3ff4feb perl is able to operate on
> a list of 100,000 files, and at commit 9c6681c it dies on a list of
> 1020 files.

I have confirmed all of the above. A very thorough bug report!

Thank you very much.

--
James E Keenan (jkeenan@cpan.org)

p5pRT commented Jul 1, 2018

The RT System itself - Status changed from 'new' to 'open'

p5pRT commented Jul 2, 2018

From @tonycoz

On Sat, Jun 30, 2018 at 07:08:44PM -0700, (via RT) wrote:

> The in-place editing feature in perl 5.28 dies with an error if called
> with 1020 or more files on the command line. This is a regression from
> 5.26, which was able to operate on any number of files on its command
> line.
>
> By way of example:
>
> $ for n in $(seq 1 1019); do mktemp XXXXXX > /dev/null; done
> $ ls | wc -l
> 1019
> $ perl -pi -e s/a/b/ *
> $ rm *
> $ for n in $(seq 1 1020); do mktemp XXXXXX > /dev/null; done
> $ ls | wc -l
> 1020
> $ perl -pi -e s/a/b/ *
> Cannot complete in-place edit of ZwuzEb: failed to rename work file
> 'XXVtDC4r' to 'ZwuzEb': Bad file descriptor.
>
> With the following modified program, one can look into the /proc
> filesystem and see that by the end of the program, the perl process is
> holding 1019 open file descriptors to the working directory.
>
> $ perl -pi -e 's/a/b/; BEGIN {print "$$\n"} END {sleep 60}' *
>
> It looks like perl is doing an opendir on the parent directory of each
> file edited with the -i option, but not closing that descriptor before
> moving on to the next file in the argument list.
>
> I have verified that this is related to the "safer in-place editing"
> feature from #127663. At commit 3ff4feb perl is able to operate on
> a list of 100,000 files, and at commit 9c6681c it dies on a list of
> 1020 files.

Please try the attached patch.

Tony

p5pRT commented Jul 2, 2018

From @tonycoz

0001-perl-133314-always-close-the-directory-handle-on-cle.patch
From 59c3aff6023b4470506a1a58ff47d7491c758925 Mon Sep 17 00:00:00 2001
From: Tony Cook <tony@develop-help.com>
Date: Mon, 2 Jul 2018 10:43:19 +1000
Subject: (perl #133314) always close the directory handle on clean up

Previously the directory handle was only closed if the rest of the
magic free clean up is done, but in most success cases that code
doesn't run, leaking the directory handle.

So always close the directory if our AV is available.
---
 doio.c | 56 +++++++++++++++++++++++++++++++-------------------------
 1 file changed, 31 insertions(+), 25 deletions(-)

diff --git a/doio.c b/doio.c
index 4b8923f77c..16daf9fd11 100644
--- a/doio.c
+++ b/doio.c
@@ -1163,44 +1163,50 @@ S_argvout_free(pTHX_ SV *io, MAGIC *mg) {
 
     /* mg_obj can be NULL if a thread is created with the handle open, in which
      case we leave any clean up to the parent thread */
-    if (mg->mg_obj && IoIFP(io)) {
-        SV **pid_psv;
+    if (mg->mg_obj) {
 #ifdef ARGV_USE_ATFUNCTIONS
         SV **dir_psv;
         DIR *dir;
+
+        dir_psv = av_fetch((AV*)mg->mg_obj, ARGVMG_ORIG_DIRP, FALSE);
+        assert(dir_psv && *dir_psv && SvIOK(*dir_psv));
+        dir = INT2PTR(DIR *, SvIV(*dir_psv));
 #endif
-        PerlIO *iop = IoIFP(io);
+        if (IoIFP(io)) {
+            SV **pid_psv;
+            PerlIO *iop = IoIFP(io);
 
-        assert(SvTYPE(mg->mg_obj) == SVt_PVAV);
+            assert(SvTYPE(mg->mg_obj) == SVt_PVAV);
 
-        pid_psv = av_fetch((AV*)mg->mg_obj, ARGVMG_ORIG_PID, FALSE);
+            pid_psv = av_fetch((AV*)mg->mg_obj, ARGVMG_ORIG_PID, FALSE);
 
-        assert(pid_psv && *pid_psv);
+            assert(pid_psv && *pid_psv);
 
-        if (SvIV(*pid_psv) == (IV)PerlProc_getpid()) {
-            /* if we get here the file hasn't been closed explicitly by the
-               user and hadn't been closed implicitly by nextargv(), so
-               abandon the edit */
-            SV **temp_psv = av_fetch((AV*)mg->mg_obj, ARGVMG_TEMP_NAME, FALSE);
-            const char *temp_pv = SvPVX(*temp_psv);
+            if (SvIV(*pid_psv) == (IV)PerlProc_getpid()) {
+                /* if we get here the file hasn't been closed explicitly by the
+                   user and hadn't been closed implicitly by nextargv(), so
+                   abandon the edit */
+                SV **temp_psv = av_fetch((AV*)mg->mg_obj, ARGVMG_TEMP_NAME, FALSE);
+                const char *temp_pv = SvPVX(*temp_psv);
 
-            assert(temp_psv && *temp_psv && SvPOK(*temp_psv));
-            (void)PerlIO_close(iop);
-            IoIFP(io) = IoOFP(io) = NULL;
+                assert(temp_psv && *temp_psv && SvPOK(*temp_psv));
+                (void)PerlIO_close(iop);
+                IoIFP(io) = IoOFP(io) = NULL;
 #ifdef ARGV_USE_ATFUNCTIONS
-            dir_psv = av_fetch((AV*)mg->mg_obj, ARGVMG_ORIG_DIRP, FALSE);
-            assert(dir_psv && *dir_psv && SvIOK(*dir_psv));
-            dir = INT2PTR(DIR *, SvIV(*dir_psv));
-            if (dir) {
-                if (unlinkat(my_dirfd(dir), temp_pv, 0) < 0 &&
-                    NotSupported(errno))
-                    (void)UNLINK(temp_pv);
-                closedir(dir);
-            }
+                if (dir) {
+                    if (unlinkat(my_dirfd(dir), temp_pv, 0) < 0 &&
+                        NotSupported(errno))
+                        (void)UNLINK(temp_pv);
+                }
 #else
-            (void)UNLINK(temp_pv);
+                (void)UNLINK(temp_pv);
 #endif
+            }
         }
+#ifdef ARGV_USE_ATFUNCTIONS
+        if (dir)
+            closedir(dir);
+#endif
     }
 
     return 0;
-- 
2.11.0

p5pRT commented Jul 2, 2018

From @tonycoz

On Mon, Jul 02, 2018 at 11:04:41AM +1000, Tony Cook wrote:

> Please try the attached patch.
>
> Tony
>
> From 59c3aff6023b4470506a1a58ff47d7491c758925 Mon Sep 17 00:00:00 2001
> From: Tony Cook <tony@develop-help.com>

You might need to edit the leading > that some mail system along the
way added.

Tony

p5pRT commented Jul 2, 2018

From @karenetheridge

Since this is a regression, this fix should be included in the 5.28.1 maint release.

p5pRT commented Jul 2, 2018

From @jkeenan

On Mon, 02 Jul 2018 01:15:56 GMT, tonyc wrote:

> On Mon, Jul 02, 2018 at 11:04:41AM +1000, Tony Cook wrote:
>> Please try the attached patch.
>>
>> Tony
>>
>> From 59c3aff6023b4470506a1a58ff47d7491c758925 Mon Sep 17 00:00:00 2001
>> From: Tony Cook <tony@develop-help.com>
>
> You might need to edit the leading > that some mail system along the
> way added.
>
> Tony

Patch worked for me. Please find attached a first attempt at a test for this behavior.

Thank you very much.
--
James E Keenan (jkeenan@cpan.org)

p5pRT commented Jul 2, 2018

From @jkeenan

133314-0002-Test-correction-to-safe-in-place-editing.patch
From d78516390da9389e32b2ca9330b8fc92c84eb59c Mon Sep 17 00:00:00 2001
From: James E Keenan <jkeenan@cpan.org>
Date: Sun, 1 Jul 2018 22:25:59 -0400
Subject: [PATCH] Test correction to safe in-place editing.

For: RT 133314
---
 t/run/switches.t | 22 +++++++++++++++++++++-
 1 file changed, 21 insertions(+), 1 deletion(-)

diff --git a/t/run/switches.t b/t/run/switches.t
index 7ccef1e..c87d016 100644
--- a/t/run/switches.t
+++ b/t/run/switches.t
@@ -12,7 +12,7 @@ BEGIN {
 
 BEGIN { require "./test.pl";  require "./loc_tools.pl"; }
 
-plan(tests => 137);
+plan(tests => 138);
 
 use Config;
 
@@ -699,3 +699,23 @@ SWTEST
     );
     like( $r, qr/ok/, 'Spaces on the #! line (#30660)' );
 }
+
+# RT #133314
+
+{
+    my $count = 1020;
+    my $fcount = sprintf("%04d" => $count);
+    print STDOUT "XXX: ", (join '|' => $count, $fcount), "\n";
+    for (my $i=1; $i<=$count; $i++) {
+	    my $tfile = (-d "t" ? "t/" : ""). "switches.133314.$$.$fcount.tmp";
+        open my $f, ">$tfile" or skip("Can't write temp file $tfile: $!");
+        close $f;
+        push @tmpfiles, $tfile;
+    }
+    $r = runperl(
+        switches => [ '-pi' ],
+        prog => 's/a/b/',
+        args  => [ "*switches.133314.*" ],
+    ),
+    is( $r, '', "RT \#133314: Safe in-place editing can handle at least $count files");
+}
-- 
2.7.4

p5pRT commented Jul 2, 2018

From @ilmari

"James E Keenan via RT" <perlbug-followup@​perl.org> writes​:

> On Mon, 02 Jul 2018 01:15:56 GMT, tonyc wrote:
>> On Mon, Jul 02, 2018 at 11:04:41AM +1000, Tony Cook wrote:
>>> Please try the attached patch.
>>>
>>> Tony
>>>
>>> From 59c3aff6023b4470506a1a58ff47d7491c758925 Mon Sep 17 00:00:00 2001
>>> From: Tony Cook <tony@develop-help.com>
>>
>> You might need to edit the leading > that some mail system along the
>> way added.
>>
>> Tony
>
> Patch worked for me. Please find attached a first attempt at a test for this behavior.
> [...]
> + my $count = 1020;

The exact value at which it fails will depend on the number of otherwise
open files and the per-process limit for open files (ulimit -n). This
defaults to 1024 on Linux, but can vary, so this is a very fragile test.

- ilmari
--
"I use RMS as a guide in the same way that a boat captain would use
a lighthouse. It's good to know where it is, but you generally
don't want to find yourself in the same spot." - Tollef Fog Heen
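
One way to make such a test track the actual limit, rather than hardcoding 1020, is sketched below; it assumes a POSIX shell is available to report ulimit -n, and it is not the test that was eventually committed:

  # Derive a file count just above the per-process descriptor limit,
  # falling back to the common Linux default if the shell call fails
  # or reports "unlimited".
  chomp(my $limit = `sh -c 'ulimit -n' 2>/dev/null`);
  $limit = 1024 unless $limit =~ /^\d+$/;
  my $count = $limit + 4;    # a few more files than can ever stay open at once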

p5pRT commented Jul 3, 2018

From mtmiller@debian.org

The patch works for me, thank you very much for the quick fix.

On Mon, Jul 02, 2018 at 03:41:16 -0700, Dagfinn Ilmari Mannsåker wrote:

> The exact value at which it fails will depend on the number of otherwise
> open files and the per-process limit for open files (ulimit -n). This
> defaults to 1024 on Linux, but can vary, so this is a very fragile test.

Agree with that. Maybe a more robust test would be for Perl to inspect
its own open file descriptors at the end of the program? I'm not much of
a Perl hacker, but maybe a better version of something like this?

  END { open my $f, ">/dev/null"; fileno($f) < 10 or die; close($f); }

--
mike
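
Another way to keep such a test small: lower the descriptor limit for the child perl so only a few dozen files are needed to trigger the leak. The sketch below assumes a POSIX shell with a working ulimit builtin, uses made-up file names, and is not what was eventually committed:

  # Run the in-place edit under a reduced soft limit; on an affected
  # (unfixed) perl this should fail after roughly 60 files instead of ~1020.
  my @files = map { "rt133314.$_.tmp" } 1 .. 80;
  for my $file (@files) {
      open my $fh, '>', $file or die "Cannot create $file: $!";
      close $fh;
  }
  my $out = `sh -c 'ulimit -n 64; perl -pi -e "s/a/b/" rt133314.*.tmp' 2>&1`;
  print $out;          # empty on success, a rename error on a leaking perl
  unlink @files;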

p5pRT commented Aug 1, 2018

From @tonycoz

On Mon, 02 Jul 2018 21:08:37 -0700, mtmiller@debian.org wrote:

> The patch works for me, thank you very much for the quick fix.
>
> On Mon, Jul 02, 2018 at 03:41:16 -0700, Dagfinn Ilmari Mannsåker wrote:
>> The exact value at which it fails will depend on the number of otherwise
>> open files and the per-process limit for open files (ulimit -n). This
>> defaults to 1024 on Linux, but can vary, so this is a very fragile test.
>
> Agree with that. Maybe a more robust test would be for Perl to inspect
> its own open file descriptors at the end of the program? I'm not much of
> a Perl hacker, but maybe a better version of something like this?
>
>   END { open my $f, ">/dev/null"; fileno($f) < 10 or die; close($f); }

It turned out not to be quite this simple, but it's a good approach.

Applied a new test in 028f02e and the original fix as 3d5e9c1.

At this point I don't think the test should be backported, since it may contain assumptions that break on some platform.

Tony

p5pRT commented Apr 12, 2019

From @steve-m-hay

This was fixed in blead by commit 3d5e9c1, and back-ported to 5.28.1 by commit e0eae03. Awaiting release in 5.30.0.

p5pRT commented Apr 12, 2019

@steve-m-hay - Status changed from 'open' to 'pending release'

p5pRT commented May 22, 2019

From @khwilliamson

Thank you for filing this report. You have helped make Perl better.

With the release today of Perl 5.30.0, this and 160 other issues have been
resolved.

Perl 5.30.0 may be downloaded via:
https://metacpan.org/release/XSAWYERX/perl-5.30.0

If you find that the problem persists, feel free to reopen this ticket.

p5pRT commented May 22, 2019

@khwilliamson - Status changed from 'pending release' to 'resolved'
