writing more than 8192 bytes to IO::Handle causes it to hang forever #5169

Open
p6rt opened this issue Mar 9, 2016 · 9 comments
p6rt commented Mar 9, 2016

Migrated from rt.perl.org#127682 (status was 'open')

Searchable as RT127682$

p6rt commented Mar 9, 2016

From @LLFourn

perl6 -e 'my $proc = run($*EXECUTABLE, "-e", q| $*ERR.print("8" x 8193);|, :out, :err); say $proc.out.slurp-rest'   # hangs forever

If you swap $*ERR with $*OUT and $proc.out with $proc.err the same thing
happens. I dunno whether it's a problem with the process reading or the
process writing.

I made RT #127681 (which is the same thing and can be closed) today. But now that I have golfed it down to this, I felt it deserved its own ticket.

p6rt commented Feb 11, 2017

From @usev6

FWIW that hangs on FreeBSD as well (maybe not too much a surprise, given the relationship of the OSes).


p6rt commented Mar 7, 2018

From @usev6

On Fri, 10 Feb 2017 23:48:54 -0800, bartolin@gmx.de wrote:

> FWIW that hangs on FreeBSD as well (maybe not too much a surprise,
> given the relationship of the OSes).

This still hangs on MoarVM, but works on JVM (I didn't check the behaviour on JVM last year):

$ ./perl6-j -e 'my $proc = run($*EXECUTABLE, "-e", q| $*ERR.print("8" x 8193);|, :out, :err); say $proc.out.slurp-rest; say "alive"'

alive
$ ./perl6-j --version
This is Rakudo version 2018.02.1-124-g8d954027f built on JVM implementing Perl 6.c.

p6rt commented Mar 7, 2018

The RT System itself - Status changed from 'new' to 'open'

p6rt commented Mar 7, 2018

From @usev6

On Fri, 10 Feb 2017 23:48:54 -0800, bartolin@gmx.de wrote:

> FWIW that hangs on FreeBSD as well (maybe not too much a surprise,
> given the relationship of the OSes).

Hmm, looks like it hangs on Linux too -- with more than 224000 bytes on my machine:

$ perl6 -e 'my $proc = run($*EXECUTABLE, "-e", q| $*ERR.print("8" x 224001);|, :out, :err); say $proc.out.slurp'   ## hangs
^C
$ perl6 --version
This is Rakudo Star version 2017.10 built on MoarVM version 2017.10 implementing Perl 6.c.
$ uname -a
Linux p6 3.2.0-4-amd64 #1 SMP Debian 3.2.96-2 x86_64 GNU/Linux

p6rt commented Mar 7, 2018

From @timo

This is a well-known problem in IPC. If you don't do it async, you risk the buffer you're not currently reading from filling up completely. Here the child program is trying to write to stderr, but can't, because that pipe's buffer is full. Meanwhile the parent program is waiting to read from the child's stdout, where nothing is arriving, and it never reads from stderr -- so it's a deadlock.

Wouldn't call this a rakudo bug.


p6rt commented Mar 8, 2018

From @geekosaur

And in the cases where it "works", the buffer is larger. Which runs the
risk of consuming all available memory in the worst case, if someone tries
to "make it work" with an expanding buffer. The fundamental deadlock
between processes blocked on I/O is not solved by buffering. Something
needs to actually consume data instead of blocking, to break the deadlock.

Perl 5 and Python both call this the open3 problem.


p6rt commented Mar 8, 2018

From @LLFourn

When I filed this ticket I kind of expected that somehow rakudo or libuv would handle this for me under the hood, but what Timo and Brandon say makes sense. The process is still running when you call slurp-rest, and slurp-rest needs EOF before it stops blocking. It will never get it, because the writing process keeps itself alive until it can finish writing to $*ERR -- and it never finishes, because it is still blocked trying to write the 8193rd byte.

Consider:

$ perl6 -e 'my $proc = run($*EXECUTABLE, "-e", q| $*OUT.print("win\n"); $*ERR.print("8" x 8193);|, :out, :err); say $proc.out.get'
win

Using .get instead of slurp-rest works fine. This suggested to me that waiting for the process to finish before calling .slurp-rest would work, and it did:

$ perl6 -e 'my $proc = run($*EXECUTABLE, "-e", q| $*OUT.print("win\n"); $*ERR.print("8" x 8193);|, :out, :err); $proc.exitcode; say $proc.out.slurp-rest'
win

But for some reason, just sleeping didn't:

$ perl6 -e 'my $proc = run($*EXECUTABLE, "-e", q| $*OUT.print("win\n"); $*ERR.print("8" x 8193);|, :out, :err); sleep 1; say $proc.out.slurp-rest'   # hangs forever

I'd say this is closable. The solution is to wait for the process to exit before reading, or to use Proc::Async.

Thanks!


p6rt added the osx label Jan 5, 2020