Segfault while measuring memory use #1916

Closed
p6rt opened this issue Jul 8, 2010 · 7 comments

p6rt commented Jul 8, 2010

Migrated from rt.perl.org#76416 (status was 'resolved')

Searchable as RT76416$


p6rt commented Jul 8, 2010

From @ajs

This morning, I was attempting to measure how much memory is used by a hash,
and discovered that Rakudo appeared to be leaking memory like a sieve even
when not adding to my hash. In fact, after 55,000 loop iterations on a loop
that simply reads /proc/PID/maps every 1000 steps, the Rakudo process
segfaulted.

I replicated the segfault with a trivial one-liner:

$ ./perl6 -e 'for 1..* -> $i { 1; }'

But you can run my original if you prefer:

#!./perl6

my %storage;
my $origsize = heapsize();
for 1..* -> $i {
  #%storage{~$i} = 1;
  if $i %% 1000 {
    my $growth = heapsize() - $origsize;
    say "At step $i, grown $growth bytes";
  }
}

sub heapsize() {
  my $f = open("/proc/{~$*PID}/maps", :r) or die "Cannot open PID/maps: $!";
  for $f.lines -> $line {
    if $line ~~ /(<xdigit>+)\-(<xdigit>+).*\[heap\]/ {
      return :16($1) - :16($0);
    }
  }
}
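
For reference, the [heap] line that heapsize() matches in /proc/PID/maps looks roughly like this (illustrative addresses, not taken from the run below):

  01c3a000-25e91000 rw-p 00000000 00:00 0          [heap]

heapsize() subtracts the two hexadecimal boundary addresses (here :16("25e91000") - :16("01c3a000")) to get the current heap extent in bytes.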

Here's the output:

$ ./testsize.p6
At step 1000, grown 43077632 bytes
At step 2000, grown 67371008 bytes
At step 3000, grown 93110272 bytes
At step 4000, grown 114012160 bytes
At step 5000, grown 134651904 bytes
At step 6000, grown 151719936 bytes
At step 7000, grown 167657472 bytes
At step 8000, grown 183394304 bytes
At step 9000, grown 196222976 bytes
At step 10000, grown 209420288 bytes
At step 11000, grown 222498816 bytes
At step 12000, grown 233148416 bytes
At step 13000, grown 246837248 bytes
At step 14000, grown 260005888 bytes
At step 15000, grown 270815232 bytes
At step 16000, grown 281759744 bytes
At step 17000, grown 292675584 bytes
At step 18000, grown 303771648 bytes
At step 19000, grown 314499072 bytes
At step 20000, grown 318832640 bytes
At step 21000, grown 331616256 bytes
At step 22000, grown 342315008 bytes
At step 23000, grown 348426240 bytes
At step 24000, grown 357629952 bytes
At step 25000, grown 370954240 bytes
At step 26000, grown 374575104 bytes
At step 27000, grown 387641344 bytes
At step 28000, grown 391786496 bytes
At step 29000, grown 402497536 bytes
At step 30000, grown 409833472 bytes
At step 31000, grown 418832384 bytes
At step 32000, grown 425979904 bytes
At step 33000, grown 439025664 bytes
At step 34000, grown 444063744 bytes
At step 35000, grown 456105984 bytes
At step 36000, grown 460099584 bytes
At step 37000, grown 467443712 bytes
At step 38000, grown 479121408 bytes
At step 39000, grown 483106816 bytes
At step 40000, grown 490717184 bytes
At step 41000, grown 502202368 bytes
At step 42000, grown 506105856 bytes
At step 43000, grown 513863680 bytes
At step 44000, grown 518402048 bytes
At step 45000, grown 531542016 bytes
At step 46000, grown 535396352 bytes
At step 47000, grown 542134272 bytes
At step 48000, grown 548675584 bytes
At step 49000, grown 553746432 bytes
At step 50000, grown 567021568 bytes
At step 51000, grown 570941440 bytes
At step 52000, grown 577708032 bytes
At step 53000, grown 584269824 bytes
At step 54000, grown 589402112 bytes
At step 55000, grown 596246528 bytes
Segmentation fault

A small excerpt of the stack trace should immediately highlight the issue:

#18010 0x00007ffff7a5e708 in Parrot_gc_mark_PMC_alive_fun (interp=0x1fd6010, obj=0xcb69ed0) at src/gc/api.c:181
#18011 0x00007ffff7af87ef in Parrot_FixedPMCArray_mark (interp=0x1fd6010, _self=<value optimized out>)
   from .../rakudo/parrot_install/lib/libparrot.so.2.5.0
#18012 0x00007ffff7a5e708 in Parrot_gc_mark_PMC_alive_fun (interp=0x1fd6010, obj=0xcb02f70) at src/gc/api.c:181
#18013 0x00007ffff7a5e708 in Parrot_gc_mark_PMC_alive_fun (interp=0x1fd6010, obj=0xcb02f90) at src/gc/api.c:181
#18014 0x00007ffff7af87ef in Parrot_FixedPMCArray_mark (interp=0x1fd6010, _self=<value optimized out>)
   from .../rakudo/parrot_install/lib/libparrot.so.2.5.0
#18015 0x00007ffff7a5e708 in Parrot_gc_mark_PMC_alive_fun (interp=0x1fd6010, obj=0xcac3e10) at src/gc/api.c:181
#18016 0x00007ffff7af87ef in Parrot_FixedPMCArray_mark (interp=0x1fd6010, _self=<value optimized out>)

Notice that I began at stack frame #18010... this is not exactly typical ;-)

When I replaced the for 1..* -> $i with a bare loop and a manually
incremented counter, I managed to get up to around 100,000 iterations with
no problems, but memory allocation by that point was around a gigabyte.
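
For comparison, such a bare loop with a manually incremented counter might look roughly like this (a sketch only; the actual replacement code was not included in the report):

my $i = 0;
loop {
  $i++;
  if $i %% 1000 {
    my $growth = heapsize() - $origsize;
    say "At step $i, grown $growth bytes";
  }
}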

All of this is on 64 bit Intel with uname reporting:

2.6.31-22-generic #60-Ubuntu SMP Thu May 27 02:41:03 UTC 2010 x86_64 GNU/Linux

--
Aaron Sherman
Email or GTalk: ajs@ajs.com
http://www.ajs.com/~ajs


p6rt commented Jul 28, 2010

@coke - Status changed from 'new' to 'open'


p6rt commented Oct 7, 2011

From @coke

On Wed Jul 07 18:04:46 2010, ajs wrote:

> I replicated the segfault with a trivial one-liner:
>
> $ ./perl6 -e 'for 1..* -> $i { 1; }'

This doesn't segfault for me, but memory usage is still crazy.

The following code:

./perl6 -e 'for 1..* -> $i { if ! ($i % 1000) { say $i } }'

outputs ~100K when memory is at ~750M
outputs ~170K when memory is at ~1000M
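
For reference, one way to watch a process's resident memory from inside the program is to read VmRSS from /proc/<pid>/status; this is only a sketch in current Rakudo syntax, and not how the figures above were obtained:

sub rss-kilobytes() {
  # /proc/<pid>/status contains a line like "VmRSS:   765432 kB"
  for "/proc/$*PID/status".IO.lines -> $line {
    return +$0 if $line ~~ / ^ 'VmRSS:' \s+ (\d+) /;
  }
}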

> [remainder of the original report quoted in full; see the first comment above]

--
Will "Coke" Coleda


p6rt commented Jul 15, 2014

From @coke

On Fri Oct 07 10:34:34 2011, coke wrote:

> [previous comment, including the original report, quoted in full; see above]

Segfault is gone; closing ticket.

--
Will "Coke" Coleda

1 similar comment

p6rt commented Jul 15, 2014

[verbatim duplicate of the previous comment]


p6rt commented Sep 2, 2014

From @coke

Whoops, forgot to actually close ticket.

--
Will "Coke" Coleda


p6rt commented Sep 2, 2014

@coke - Status changed from 'open' to 'resolved'

p6rt closed this as completed Sep 2, 2014