Report information
Id: 127854
Status: open
Priority: 0
Queue: perl6

Owner: Nobody
Requestors: james.neko [at] gmail.com
Cc:
AdminCc:

Severity: (no value)
Tag: (no value)
Platform: (no value)
Patch Status: (no value)
VM: (no value)



To: rakudobug [...] perl.org
From: James Clark <james.neko [...] gmail.com>
Date: Fri, 8 Apr 2016 02:57:46 +1000
Subject: [BUG] IO::Handle.read() won't return buffer sizes >= 100_000_000

$ perl6 --version
This is Rakudo version 2016.03-98-g61d231c built on MoarVM version 2016.03-84-g4afd7b6
implementing Perl 6.c.

I recently attempted to grab a largeish (128 MiB) Buf of pseudorandom data from /dev/urandom, only to get errors back from read(). I found this surprising, since the IO::Handle.read() documentation doesn't specify any kind of limit.

At first I started thinking, aha, it's calling C's read() and that's limited to SSIZE_MAX or something. And if I were programming in C, it would not surprise me that I can't ask for arbitrarily-large buffers. I assumed Perl's implementation would just call the lower-level read() repeatedly and then give me back a nice large Buf object. I've checked SSIZE_MAX on my system though, and that's not quite it. After a binary search, I discovered that the limit is 100_000_000 bytes. I don't know what's special about that number.

Code to demonstrate:-

$ perl6
To exit type 'exit' or '^D'
> my $fh = open("/dev/zero", :r, :bin);
IO::Handle<"/dev/zero".IO>(opened, at octet 0)
> my $buf = $fh.read(100_000_000);
Out of range: attempted to read 100000000 bytes from filehandle
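A chunked-read workaround along the lines I was imagining might look like the sketch below. (The 1 MiB chunk size and the /dev/urandom source are arbitrary choices for illustration, not anything the implementation actually does.)

# Workaround sketch: build a large Buf by issuing many smaller read() calls
# and concatenating the results, instead of one huge read().
my $fh    = open("/dev/urandom", :r, :bin);
my $want  = 128 * 1024 * 1024;    # 128 MiB total
my $chunk = 1_048_576;            # 1 MiB per read() call (arbitrary)
my $buf   = Buf.new;
while $buf.elems < $want {
    my $piece = $fh.read($chunk min ($want - $buf.elems));
    last unless $piece.elems;     # stop on EOF
    $buf ~= $piece;               # Blob concatenation yields a Buf
}
$fh.close;
say $buf.elems;                   # 134217728 if the full amount was read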

Fun extra maybe-bug that has me questioning my sanity:-

Obviously, $fh.read(99_999_999) does not produce the error message, and in my original quest to find the limit it managed to return a value in less than 30 seconds or so. But, being the responsible bug-reporter that I am, I made sure to upgrade my version of Rakudo by running 'rakudobrew build moar' and checked whether the behaviour was present in the latest version. Except... attempting to read the large-but-still-valid buffer seems to take forever now.

It's certainly testing my patience; it's late, and I'd rather submit this report now and figure out just how many hours it takes later. I've built moar-2016.{02,01.1} and they also appear to be taking their sweet time returning this ~100 MB buffer. The thing is... I can't reproduce the "fast" experience I was getting previously. I'm pretty sure my perl6 was reporting itself as moar-2015.12, the Christmas release, and yet if I check that out specifically it still takes (figuratively) forever.

Anyway, if it's unreasonable to ask for such a large value, it's a documentation bug; otherwise, perhaps Perl 6 needs to do some magic behind the scenes. It's certainly busy doing *something*.
Thanks.
-James

Still present in Rakudo version 2016.06-154-g55c359e built on MoarVM version 2016.06-9-g8fc21d5
RT-Send-CC: perl6-compiler [...] perl.org
On Thu, 07 Apr 2016 09:57:57 -0700, james.neko@gmail.com wrote:
The limit was artificial and got removed in https://github.com/rakudo/rakudo/commit/756877e. Not going to add a test to the regular stresstest, but it wouldn't hurt to add one to the "dangerous/exotic" category of tests we were discussing a while back.
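Purely as an illustration of what such an exotic test might look like (the use of /dev/zero, the plan count, and the exact byte count are assumptions based on this ticket, not the actual roast test, and it presumes a POSIX system):

use Test;
plan 1;

# Request 100_000_000 bytes in one read(); before the fix this threw
# "Out of range", so getting the full buffer back covers the regression.
my $fh  = open("/dev/zero", :r, :bin);
my $buf = try $fh.read(100_000_000);
ok $buf.defined && $buf.elems == 100_000_000,
    'read() accepts a request of 100_000_000 bytes';
$fh.close;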

