Report information
Id: 127682
Status: open
Priority: 0
Queue: perl6

Owner: Nobody
Requestors: lloyd.fourn [at] gmail.com
Cc:
AdminCc:

Severity: (no value)
Tag: (no value)
Platform: (no value)
Patch Status: (no value)
VM: (no value)



Subject: [OSX] writing more than 8192 bytes to IO::Handle causes it to hang forever
To: "rakudobug [...] perl.org" <rakudobug [...] perl.org>
From: Lloyd Fournier <lloyd.fourn [...] gmail.com>
Date: Wed, 09 Mar 2016 10:39:06 +0000

perl6 -e 'my $proc = run($*EXECUTABLE, "-e", q| $*ERR.print("8" x 8193);|,:out,:err);
          say $proc.out.slurp-rest'   # hangs forever

If you swap $*ERR with $*OUT and $proc.out with $proc.err, the same thing happens. I don't know whether the problem is on the reading side or the writing side.

I made RT #127681 (which is the same issue and can be closed) today, but now that I've golfed it down to this, I felt it deserved its own ticket.
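The 8192-byte threshold in the subject line matches a common default pipe buffer size. One way to check the actual capacity on a POSIX system is to write to a non-blocking pipe until the kernel refuses more data. This Python sketch is illustrative only (not part of the original report) and uses only the standard library:

```python
import fcntl
import os

# Create a pipe and make the write end non-blocking, so that a full
# buffer raises BlockingIOError instead of hanging the writer forever.
r, w = os.pipe()
fcntl.fcntl(w, fcntl.F_SETFL, os.O_NONBLOCK)

capacity = 0
try:
    while True:
        capacity += os.write(w, b"8" * 4096)
except BlockingIOError:
    pass  # the buffer is full; `capacity` bytes fit with no reader attached

print(capacity)  # typically 65536 on Linux; macOS defaults are often smaller
os.close(r)
os.close(w)
```

Whatever the exact number, once that many bytes sit unread in the pipe, the next blocking write from the child hangs until somebody reads.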


RT-Send-CC: perl6-compiler [...] perl.org
FWIW that hangs on FreeBSD as well (maybe not too much a surprise, given the relationship of the OSes).
On Fri, 10 Feb 2017 23:48:54 -0800, bartolin@gmx.de wrote:
> FWIW that hangs on FreeBSD as well (maybe not too much a surprise,
> given the relationship of the OSes).

This still hangs on MoarVM, but works on JVM (I didn't check the behaviour on JVM last year):

$ ./perl6-j -e 'my $proc = run($*EXECUTABLE, "-e", q| $*ERR.print("8" x 8193);|,:out,:err); say $proc.out.slurp-rest; say "alive"'
alive
$ ./perl6-j --version
This is Rakudo version 2018.02.1-124-g8d954027f built on JVM implementing Perl 6.c.
On Fri, 10 Feb 2017 23:48:54 -0800, bartolin@gmx.de wrote:
> FWIW that hangs on FreeBSD as well (maybe not too much a surprise,
> given the relationship of the OSes).

Hmm, looks like it hangs on Linux too -- with more than 224000 bytes on my machine:

$ perl6 -e 'my $proc = run($*EXECUTABLE, "-e", q| $*ERR.print("8" x 224001);|,:out,:err); say $proc.out.slurp'   ## hangs
^C
$ perl6 --version
This is Rakudo Star version 2017.10 built on MoarVM version 2017.10 implementing Perl 6.c.
$ uname -a
Linux p6 3.2.0-4-amd64 #1 SMP Debian 3.2.96-2 x86_64 GNU/Linux
To: perl6-compiler [...] perl.org
Date: Thu, 8 Mar 2018 00:42:02 +0100
Subject: Re: [perl #127682] [OSX] writing more than 8192 bytes to IO::Handle causes it to hang forever
From: Timo Paulssen <timo [...] wakelift.de>
This is a well-known problem in IPC. If you don't do it async, you risk the buffer you're not currently reading from filling up completely. Now your client program is trying to write to stderr, but can't because it's full. Your parent program is hoping to read from the child's stdout, but nothing is arriving, and it never reads from stderr, so it's a deadlock.

Wouldn't call this a rakudo bug.

On 07/03/18 23:04, Christian Bartolomaeus via RT wrote:
> On Fri, 10 Feb 2017 23:48:54 -0800, bartolin@gmx.de wrote:
>> FWIW that hangs on FreeBSD as well (maybe not too much a surprise,
>> given the relationship of the OSes).
> Hmm, looks like it hangs on Linux too -- with more than 224000 bytes on my machine:
>
> $ perl6 -e 'my $proc = run($*EXECUTABLE, "-e", q| $*ERR.print("8" x 224001);|,:out,:err); say $proc.out.slurp'   ## hangs
> ^C
> $ perl6 --version
> This is Rakudo Star version 2017.10 built on MoarVM version 2017.10
> implementing Perl 6.c.
> $ uname -a
> Linux p6 3.2.0-4-amd64 #1 SMP Debian 3.2.96-2 x86_64 GNU/Linux
Date: Wed, 7 Mar 2018 22:50:50 -0500
Subject: Re: [perl #127682] [OSX] writing more than 8192 bytes to IO::Handle causes it to hang forever
From: Brandon Allbery <allbery.b [...] gmail.com>
CC: perl6-compiler <perl6-compiler [...] perl.org>
To: Timo Paulssen <timo [...] wakelift.de>
Download (untitled) / with headers
text/plain 1.7k
And in the cases where it "works", the buffer is simply larger, which risks consuming all available memory in the worst case if someone tries to "make it work" with an ever-expanding buffer. The fundamental deadlock between processes blocked on I/O is not solved by buffering; something needs to actually consume data, instead of blocking, to break the deadlock.

Perl 5 and Python both call this the open3 problem.
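Python's answer to the open3 problem is `subprocess.Popen.communicate()`, which reads both pipes concurrently so neither one can fill up and block the child. A small illustrative sketch (not from the ticket) mirroring the Raku reproducer, with the child writing well past any pipe buffer on stderr:

```python
import subprocess
import sys

# Child: flood stderr with 200,000 bytes, write a short marker to stdout.
child_code = "import sys; sys.stderr.write('8' * 200000); sys.stdout.write('done')"

proc = subprocess.Popen(
    [sys.executable, "-c", child_code],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    text=True,
)
# communicate() drains stdout and stderr at the same time, then waits
# for the child to exit -- this is what avoids the deadlock.
out, err = proc.communicate()

print(out)       # done
print(len(err))  # 200000
```

Reading only one of the two pipes to EOF before touching the other (e.g. `proc.stdout.read()` first) would recreate exactly the hang reported here.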

On Wed, Mar 7, 2018 at 6:42 PM, Timo Paulssen <timo@wakelift.de> wrote:
This is a well-known problem in IPC. If you don't do it async, you risk
the buffer you're not currently reading from filling up completely. Now
your client program is trying to write to stderr, but can't because it's
full. Your parent program is hoping to read from the child's stdout, but nothing is
arriving, and it never reads from stderr, so it's a deadlock.

Wouldn't call this a rakudo bug.


On 07/03/18 23:04, Christian Bartolomaeus via RT wrote:
> On Fri, 10 Feb 2017 23:48:54 -0800, bartolin@gmx.de wrote:
>> FWIW that hangs on FreeBSD as well (maybe not too much a surprise,
>> given the relationship of the OSes).
> Hmm, looks like it hangs on Linux too -- with more than 224000 bytes on my machine:
>
> $ perl6 -e 'my $proc = run($*EXECUTABLE, "-e", q| $*ERR.print("8" x 224001);|,:out,:err); say $proc.out.slurp'   ## hangs
> ^C
> $ perl6 --version
> This is Rakudo Star version 2017.10 built on MoarVM version 2017.10
> implementing Perl 6.c.
> $ uname -a
> Linux p6 3.2.0-4-amd64 #1 SMP Debian 3.2.96-2 x86_64 GNU/Linux



--
brandon s allbery kf8nh                               sine nomine associates
allbery.b@gmail.com                                  ballbery@sinenomine.net
unix, openafs, kerberos, infrastructure, xmonad        http://sinenomine.net
Date: Thu, 08 Mar 2018 06:49:42 +0000
From: Lloyd Fournier <lloyd.fourn [...] gmail.com>
Subject: Re: [perl #127682] [OSX] writing more than 8192 bytes to IO::Handle causes it to hang forever
To: perl6-bugs-followup [...] perl.org
When I filed this ticket I kinda expected that somehow rakudo or libuv would handle this for me under the hood, but what Timo and Brandon say makes sense. The process is still running when you call slurp-rest. slurp-rest needs EOF before it stops blocking, and it will never get it, because the writing process keeps itself alive until it can finish writing to $*ERR. But it can never finish, because it is still blocked trying to write the 8193rd byte.

Consider:

$ perl6 -e 'my $proc = run($*EXECUTABLE, "-e", q| $*OUT.print("win\n"); $*ERR.print("8" x 8193);|,:out,:err); say $proc.out.get' 
win

Using .get instead of slurp-rest works fine. This suggested to me that waiting for the process to finish before calling .slurp-rest would work, and it did:

$ perl6 -e 'my $proc = run($*EXECUTABLE, "-e", q| $*OUT.print("win\n"); $*ERR.print("8" x 8193);|,:out,:err); $proc.exitcode; say $proc.out.slurp-rest'
win

But for some reason, just sleeping didn't:

$ perl6 -e 'my $proc = run($*EXECUTABLE, "-e", q| $*OUT.print("win\n"); $*ERR.print("8" x 8193);|,:out,:err); sleep 1; say $proc.out.slurp-rest'   # hangs forever

I'd say this is closable. The solution is to wait for the process to exit before reading, or to use Proc::Async.
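The Proc::Async approach works because it taps both streams at once, so neither pipe ever fills. A rough analogue of that pattern, sketched in Python with asyncio for illustration (this is not Raku code, and the child program is hypothetical):

```python
import asyncio
import sys

# Child: write "win" to stdout and 200,000 bytes to stderr.
child_code = "import sys; sys.stderr.write('8' * 200000); sys.stdout.write('win')"

async def main():
    proc = await asyncio.create_subprocess_exec(
        sys.executable, "-c", child_code,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    # Read both pipes concurrently -- the moral equivalent of tapping
    # both streams with Proc::Async -- then wait for the child to exit.
    out, err = await asyncio.gather(proc.stdout.read(), proc.stderr.read())
    await proc.wait()
    return out, err

out, err = asyncio.run(main())
print(out.decode())  # win
print(len(err))      # 200000
```

Because both reads run concurrently, it does not matter which stream the child floods first.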

Thanks!


On Thu, Mar 8, 2018 at 2:51 PM Brandon Allbery via RT <perl6-bugs-followup@perl.org> wrote:



This service is sponsored and maintained by Best Practical Solutions and runs on Perl.org infrastructure.

For issues related to this RT instance (aka "perlbug"), please contact perlbug-admin at perl.org