Hint-hash copying can leak #11822

Closed
p5pRT opened this issue Dec 25, 2011 · 23 comments

p5pRT commented Dec 25, 2011

Migrated from rt.perl.org#107000 (status was 'resolved')

Searchable as RT107000$

p5pRT commented Dec 25, 2011

From @cpansprout

If the hint hash is tied and a tie method dies, this variable in hv_copy_hints_hv can leak​:

HV *
Perl_hv_copy_hints_hv(pTHX_ HV *const ohv)
{
  HV * const hv = newHV();

The obvious patch, below, would break namespace::clean and B::Hooks::EndOfScope.

Would wrapping the contents of this function in ENTER/LEAVE work?

Inline Patch
diff --git a/hv.c b/hv.c
index 1c5e6bc..0e6ed00 100644
--- a/hv.c
+++ b/hv.c
@@ -1448,7 +1448,7 @@ added to it.  A pointer to the new hash is returned.
 HV *
 Perl_hv_copy_hints_hv(pTHX_ HV *const ohv)
 {
-    HV * const hv = newHV();
+    HV * const hv = SvREFCNT_inc_NN(sv_2mortal(newHV()));
 
     if (ohv) {
        STRLEN hv_max = HvMAX(ohv);

---
Flags:   category=core   severity=low

Site configuration information for perl 5.15.5​:

Configured by sprout at Sun Dec 18 11​:26​:14 PST 2011.

Summary of my perl5 (revision 5 version 15 subversion 5) configuration​:
  Snapshot of​: 5dca8ed
  Platform​:
  osname=darwin, osvers=10.5.0, archname=darwin-2level
  uname='darwin pint.local 10.5.0 darwin kernel version 10.5.0​: fri nov 5 23​:20​:39 pdt 2010; root​:xnu-1504.9.17~1release_i386 i386 '
  config_args='-de -Dusedevel -DDEBUGGING=-g'
  hint=recommended, useposix=true, d_sigaction=define
  useithreads=undef, usemultiplicity=undef
  useperlio=define, d_sfio=undef, uselargefiles=define, usesocks=undef
  use64bitint=undef, use64bitall=undef, uselongdouble=undef
  usemymalloc=n, bincompat5005=undef
  Compiler​:
  cc='cc', ccflags ='-fno-common -DPERL_DARWIN -fno-strict-aliasing -pipe -fstack-protector -I/usr/local/include',
  optimize='-O3 -g',
  cppflags='-fno-common -DPERL_DARWIN -fno-strict-aliasing -pipe -fstack-protector -I/usr/local/include'
  ccversion='', gccversion='4.2.1 (Apple Inc. build 5664)', gccosandvers=''
  intsize=4, longsize=4, ptrsize=4, doublesize=8, byteorder=1234
  d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=16
  ivtype='long', ivsize=4, nvtype='double', nvsize=8, Off_t='off_t', lseeksize=8
  alignbytes=8, prototype=define
  Linker and Libraries​:
  ld='env MACOSX_DEPLOYMENT_TARGET=10.3 cc', ldflags =' -fstack-protector -L/usr/local/lib'
  libpth=/usr/local/lib /usr/lib
  libs=-ldbm -ldl -lm -lutil -lc
  perllibs=-ldl -lm -lutil -lc
  libc=, so=dylib, useshrplib=false, libperl=libperl.a
  gnulibc_version=''
  Dynamic Linking​:
  dlsrc=dl_dlopen.xs, dlext=bundle, d_dlsymun=undef, ccdlflags=' '
  cccdlflags=' ', lddlflags=' -bundle -undefined dynamic_lookup -L/usr/local/lib -fstack-protector'

Locally applied patches​:
 


@​INC for perl 5.15.5​:
  /usr/local/lib/perl5/site_perl/5.15.5/darwin-2level
  /usr/local/lib/perl5/site_perl/5.15.5
  /usr/local/lib/perl5/5.15.5/darwin-2level
  /usr/local/lib/perl5/5.15.5
  /usr/local/lib/perl5/site_perl
  .


Environment for perl 5.15.5​:
  DYLD_LIBRARY_PATH (unset)
  HOME=/Users/sprout
  LANG=en_US.UTF-8
  LANGUAGE (unset)
  LD_LIBRARY_PATH (unset)
  LOGDIR (unset)
  PATH=/usr/bin​:/bin​:/usr/sbin​:/sbin​:/usr/local/bin​:/usr/X11/bin​:/usr/local/bin
  PERL_BADLANG (unset)
  SHELL=/bin/bash

p5pRT commented Jan 3, 2012

From zefram@fysh.org

Father Chrysostomos wrote​:

Would wrapping the contents of this function in ENTER/LEAVE work?

Wrong stack. You'd also need some SAVETMPS/FREETMPS, to go with
mortalisation. But more practically you could use a SAVEFREESV, instead
of mortalisation, along with ENTER/LEAVE.

  HV *hv;
  ENTER;
  hv = newHV();
  SvREFCNT_inc(hv);
  SAVEFREESV(hv);
  /* ... stuff that may die ... */
  LEAVE;
  return hv;

-zefram

p5pRT commented Jan 3, 2012

The RT System itself - Status changed from 'new' to 'open'

p5pRT commented Jan 5, 2012

From @cpansprout

On Tue Jan 03 04​:23​:38 2012, zefram@​fysh.org wrote​:

Father Chrysostomos wrote​:

Would wrapping the contents of this function in ENTER/LEAVE work?

Wrong stack. You'd also need some SAVETMPS/FREETMPS, to go with
mortalisation. But more practically you could use a SAVEFREESV, instead
of mortalisation, along with ENTER/LEAVE.

  HV *hv;
  ENTER;
  hv = newHV();
  SvREFCNT_inc(hv);
  SAVEFREESV(hv);
  /* ... stuff that may die ... */
  LEAVE;
  return hv;

This little snippet (add it to t/op/svleak.t) shows that the number of
SVs goes up by four for each failed eval​:

# [perl #107000]
package hhtie {
  sub TIEHASH { bless [] }
  sub STORE { $_[0][0]{$_[1]} = $_[2] }
  sub FETCH { die if $explosive; $_[0][0]{$_[1]} }
  sub FIRSTKEY { my $a = scalar keys %{$_[0][0]}; each %{$_[0][0]} }
  sub NEXTKEY { each %{$_[0][0]} }
}

warn sv_count;
for(1..10) {
eval q`
  $hhtie::explosive = 0;
  BEGIN {
    tie %^H, hhtie;
    $^H{foo} = bar;
    $hhtie::explosive = 1;
  }
  { 1; }
`;
warn sv_count;
}

This patch only reduces the number to 3, so we have other things
leaking. Do failed evals leak in general? Is that unfixable? If
that’s the case, it’s probably not worthwhile plugging something so rare
as this.

Inline Patch
diff --git a/hv.c b/hv.c
index 9c5670b..16bc536 100644
--- a/hv.c
+++ b/hv.c
@@ -1459,6 +1459,9 @@ Perl_hv_copy_hints_hv(pTHX_ HV *const ohv)
 	const I32 riter = HvRITER_get(ohv);
 	HE * const eiter = HvEITER_get(ohv);
 
+	ENTER;
+	SAVEFREESV(hv);
+
 	while (hv_max && hv_max + 1 >= hv_fill * 2)
 	    hv_max = hv_max / 2;
 	HvMAX(hv) = hv_max;
@@ -1480,6 +1483,9 @@ Perl_hv_copy_hints_hv(pTHX_ HV *const ohv)
 	}
 	HvRITER_set(ohv, riter);
 	HvEITER_set(ohv, eiter);
+
+	SvREFCNT_inc_simple_void_NN(hv);
+	LEAVE;
     }
     hv_magic(hv, NULL, PERL_MAGIC_hints);
     return hv;


-- 

Father Chrysostomos

p5pRT commented Jan 8, 2012

From @iabyn

On Wed, Jan 04, 2012 at 11​:14​:30PM -0800, Father Chrysostomos via RT wrote​:

This patch only reduces the number to 3, so we have other things
leaking. Do failed evals leak in general? Is that unfixable? If
that’s the case, it’s probably not worthwhile plugging something so rare
as this.

failed evals are known to leak OPs during compilation (and this is hard to
fix). They are not supposed to leak SVs, unless possibly those SVs are
attached to OPs.

--
print+qq&$}$"$/$s$,$a$d$g$s$@​$.$q$,$​:$.$q$^$,$@​$a$$;$.$q$m&if+map{m,^\d{0\,},,${$​::{$'}}=chr($"+=$&||1)}q&10m22,42}6​:17a22.3@​3;^2dg3q/s"&=~m*\d\*.*g

p5pRT commented Jan 9, 2012

From @nwc10

On Sun, Jan 08, 2012 at 04​:02​:57PM +0000, Dave Mitchell wrote​:

On Wed, Jan 04, 2012 at 11​:14​:30PM -0800, Father Chrysostomos via RT wrote​:

This patch only reduces the number to 3, so we have other things
leaking. Do failed evals leak in general? Is that unfixable? If
that’s the case, it’s probably not worthwhile plugging something so rare
as this.

failed evals are known to leak OPs during compilation (and this is hard to
fix). They are not supposed to leak SVs, unless possibly those SVs are
attached to OPs.

I had a hunch that an approach that might work to fix this is to

a​: always use the slab allocator code for OPs
b​: use a new slab when starting each new compilation
  (and by implication, track if this compilation needs more than one slab)
c​: clean up the *slab* if the compilation fails

but (b) might increase memory use (ends of partial slabs)
(although *this* could be avoided by copying OPs during the optimiser, which
could also re-order them to execution order, and drop NULL OPs*)

and (c) might require *removing* current checks which attempt to free OPs
on failure, to avoid double frees.

It strikes me as a big project.

Nicholas Clark

* Which is also problematic, because some NULL OPs are still needed. IIRC
  certainly the ex-nextstate OPs are used to generate line numbers in
  warnings. But it might be valuable to work to eliminate those, by
  using something more space efficient than a full COP to store mere line
  number changes. (ie same package, same file)
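
A minimal, self-contained C sketch of the (a)/(b)/(c) plan above. This is not perl's real allocator; every name in it (op_slab, slab_new_op, fake_op) is illustrative only.

  /* Hypothetical model of per-compilation op slabs; not perl source. */
  #include <stdlib.h>

  #define SLAB_SLOTS 64

  typedef struct fake_op {            /* stand-in for a real OP */
      int op_type;
      struct fake_op *op_next;
  } fake_op;

  typedef struct op_slab {
      struct op_slab *next;           /* (b) a compilation may need more than one slab */
      size_t          used;
      fake_op         slots[SLAB_SLOTS];
  } op_slab;

  /* (b) start each new compilation with a fresh slab chain */
  static op_slab *slab_start(void) {
      return calloc(1, sizeof(op_slab));
  }

  /* (a) every op is carved out of the current compilation's slab */
  static fake_op *slab_new_op(op_slab **head, int type) {
      fake_op *o;
      if ((*head)->used == SLAB_SLOTS) {      /* grow the chain when a slab fills up */
          op_slab *s = calloc(1, sizeof(op_slab));
          s->next = *head;
          *head = s;
      }
      o = &(*head)->slots[(*head)->used++];
      o->op_type = type;
      return o;
  }

  /* (c) if the compilation fails, drop every slab wholesale: ops that were
   * never linked into any tree (lost when a croak() unwound the C stack)
   * are reclaimed along with everything else. */
  static void slab_free_all(op_slab *head) {
      while (head) {
          op_slab *next = head->next;
          free(head);
          head = next;
      }
  }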

p5pRT commented Jan 9, 2012

From zefram@fysh.org

Nicholas Clark wrote​:

and (c) might require *removing* current checks which attempt to free OPs
on failure, to avoid double frees.

You'd need to cope with op freeing more generally anyway​: it's a normal
part of op tree building. Bit like cell apoptosis that forms higher-level
anatomical structures. Then you have to deal with fragmentation in
the slab.

certainly the ex-nextstate OPs are used to generate line numbers in
warnings.

No, the line numbers come from PL_curcop, which is set by pp_nextstate.
An *ex-*nextstate op would only occur where that assignment has been
determined to be unnecessary, because it's immediately followed by
another nextstate.

-zefram

p5pRT commented Jan 9, 2012

From @nwc10

On Mon, Jan 09, 2012 at 05​:35​:53PM +0000, Zefram wrote​:

Nicholas Clark wrote​:

and (c) might require *removing* current checks which attempt to free OPs
on failure, to avoid double frees.

You'd need to cope with op freeing more generally anyway​: it's a normal
part of op tree building. Bit like cell apoptosis that forms higher-level
anatomical structures. Then you have to deal with fragmentation in
the slab.

I hadn't thought of fragmentation within the slab. It's going to have the same
effect as unused slab ends, isn't it?

What I was thinking was

a: OPs are in slabs, in some way that makes it possible to walk the slab
  and determine whether each bit thinks it's an allocated OP or now free
  space
b: When compilation fails (and, hmm, maybe at final tree free too),
  instead of freeing the OPs as a tree, the code instead walks the slab
  and does cleanup on every OP that thinks it's live and needs cleanup.
  Then the slab gets dropped.

That way, unchained OPs that got lost to a compilation exception would still
get cleaned up (which I think is the leak problem, isn't it? Heap-allocated
OPs, stored as C stack variables, not yet linked into any tree of OPs, and
then kaboom, croak(), longjmp, perl stack unwinding).

certainly the ex-nextstate OPs are used to generate line numbers in
warnings.

No, the line numbers come from PL_curcop, which is set by pp_nextstate.
An *ex-*nextstate op would only occur where that assignment has been
determined to be unnecessary, because it's immediately followed by
another nextstate.

I may have misremembered the details, but this bit of evil is live in
S_closest_cop() in util.c​:

  /* If the OP_NEXTSTATE has been optimised away we can still use it
  * the get the file and line number. */

  if (kid->op_type == OP_NULL && kid->op_targ == OP_NEXTSTATE)
  cop = (const COP *)kid;

  /* Keep searching, and return when we've found something. */

and IIRC take it out, and tests fail.

Nicholas Clark

p5pRT commented Jan 9, 2012

From @pjcj

On Mon, Jan 09, 2012 at 05​:42​:55PM +0000, Nicholas Clark wrote​:

On Mon, Jan 09, 2012 at 05​:35​:53PM +0000, Zefram wrote​:

Nicholas Clark wrote​:

certainly the ex-nextstate OPs are used to generate line numbers in
warnings.

No, the line numbers come from PL_curcop, which is set by pp_nextstate.
An *ex-*nextstate op would only occur where that assignment has been
determined to be unnecessary, because it's immediately followed by
another nextstate.

I may have misremembered the details, but this bit of evil is live in
S_closest_cop() in util.c​:

    /* If the OP_NEXTSTATE has been optimised away we can still use it
     * the get the file and line number. */

    if (kid->op_type == OP_NULL && kid->op_targ == OP_NEXTSTATE)
        cop = (const COP *)kid;

    /* Keep searching, and return when we've found something. */

and IIRC take it out, and tests fail.

Now I'm not sure whether to feel proud or to slink off quietly into the
background.

This particular bit of evil is a little over 11 years old and stems from
me trying to get more accurate line numbers, primarily for Devel​::Cover.
You can see it in its full glory as ae7d165

The tests that should fail are those four added in the patch, but it's
possible that more recently added tests will fail too.

You're correct that looking at ostensibly dead data structures to get
useful information is rather evil, and I would certainly be very pleased
if there were a better way of doing that.

If we went back to the way the line numbers were reported 11 years ago
the world wouldn't end, but we would probably start getting bug reports
about incorrect line numbers again.

--
Paul Johnson - paul@​pjcj.net
http​://www.pjcj.net

p5pRT commented Jan 10, 2012

From @rurban

On Mon, Jan 9, 2012 at 2​:13 PM, Paul Johnson <paul@​pjcj.net> wrote​:

On Mon, Jan 09, 2012 at 05​:42​:55PM +0000, Nicholas Clark wrote​:

On Mon, Jan 09, 2012 at 05​:35​:53PM +0000, Zefram wrote​:

Nicholas Clark wrote​:

 certainly the ex-nextstate OPs are used to generate line numbers in
 warnings.

No, the line numbers come from PL_curcop, which is set by pp_nextstate.
An *ex-*nextstate op would only occur where that assignment has been
determined to be unnecessary, because it's immediately followed by
another nextstate.

I may have misremembered the details, but this bit of evil is live in
S_closest_cop() in util.c​:

          /* If the OP_NEXTSTATE has been optimised away we can still use it
           * the get the file and line number. */

          if (kid->op_type == OP_NULL && kid->op_targ == OP_NEXTSTATE)
              cop = (const COP *)kid;

          /* Keep searching, and return when we've found something. */

and IIRC take it out, and tests fail.

Now I'm not sure whether to feel proud or to slink off quietly into the
background.

This particular bit of evil is a little over 11 years old and stems from
me trying to get more accurate line numbers, primarily for Devel​::Cover.
You can see it in its full glory as ae7d165

The tests that should fail are those four added in the patch, but it's
possible that more recently added tests will fail too.

You're correct that looking at ostensibly dead data structures to get
useful information is rather evil, and I would certainly be very pleased
if there were a better way of doing that.

If we went back to the way the line numbers were reported 11 years ago
the world wouldn't end, but we would probably start getting bug reports
about incorrect line numbers again.

Dropping the nullified ex-cops entirely would be easy with my
https://github.com/rurban/perl/tree/oplines approach, which moves
op_line from the cop to each op. In that case the file from the ex-cops
is not needed, only the line.

In my tree, only the cop search used when a warning needs the line
number and file would have to be revised. Not a big deal, but I haven't
finished the op_file part.
--
Reini Urban
http​://cpanel.net/   http​://www.perl-compiler.org/
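
The gist of the oplines idea, as an illustrative pair of C structs (not the actual layout in that branch):

  /* Current scheme: the line number lives only on the COP (nextstate),
   * so a nulled ex-nextstate must be kept around just for its line. */
  struct cop_like {
      unsigned int op_type;
      unsigned int cop_line;
  };

  /* "oplines" idea: every op carries its own line, so warnings can read
   * it directly and the nulled ex-cops become droppable. */
  struct op_with_line {
      unsigned int op_type;
      unsigned int op_line;
  };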

p5pRT commented Jan 10, 2012

From @timbunce

On Mon, Jan 09, 2012 at 09​:13​:33PM +0100, Paul Johnson wrote​:

On Mon, Jan 09, 2012 at 05​:42​:55PM +0000, Nicholas Clark wrote​:

I may have misremembered the details, but this bit of evil is live in
S_closest_cop() in util.c​:

    /* If the OP_NEXTSTATE has been optimised away we can still use it
     * the get the file and line number. */

    if (kid->op_type == OP_NULL && kid->op_targ == OP_NEXTSTATE)
        cop = (const COP *)kid;

    /* Keep searching, and return when we've found something. */

and IIRC take it out, and tests fail.

Now I'm not sure whether to feel proud or to slink off quietly into the
background.

This particular bit of evil is a little over 11 years old and stems from
me trying to get more accurate line numbers, primarily for Devel​::Cover.

The NYTProf statement profiler depends on the same logic.
(Actually it has a copy of something like S_closest_cop in the source.)

You're correct that looking at ostensibly dead data structures to get
useful information is rather evil, and I would certainly be very pleased
if there were a better way of doing that.

Ditto.

If we went back to the way the line numbers were reported 11 years ago
the world wouldn't end, but we would probably start getting bug reports
about incorrect line numbers again.

And wrongly apportioned profile time.

Tim.

p5pRT commented Apr 23, 2012

From @cpansprout

On Mon Jan 09 12​:14​:15 2012, paul@​pjcj.net wrote​:

Now I'm not sure whether to feel proud or to slink off quietly into the
background.

This particular bit of evil is a little over 11 years old and stems from
me trying to get more accurate line numbers, primarily for Devel​::Cover.
You can see it in its full glory as
ae7d165

Now we just need to propagate those same line numbers into caller
somehow, at least when it is called from __DIE__. See bug #111982.

--

Father Chrysostomos

p5pRT commented Jun 29, 2012

From @cpansprout

On Mon Jan 09 09​:31​:19 2012, nicholas wrote​:

On Sun, Jan 08, 2012 at 04​:02​:57PM +0000, Dave Mitchell wrote​:

On Wed, Jan 04, 2012 at 11​:14​:30PM -0800, Father Chrysostomos via RT
wrote​:

This patch only reduces the number to 3, so we have other things
leaking. Do failed evals leak in general? Is that unfixable? If
that’s the case, it’s probably not worthwhile plugging
something so rare
as this.

failed evals are known to leak OPs during compilation (and this is
hard to
fix). They are not supposed to leak SVs, unless possibly those are
SVs are
attached to OPs.

I had forgotten about this proposal when I started writing a slab
allocator (see c5fb998 and 8be227a).

When I first read it, it was mostly jargon and I didn’t understand it.

Now that I read it again, I realise I have reinvented it quite closely.
:-) I even resolved Zefram’s issue about fragmentation by using a free
chain.

I had a hunch that an approach that might work to fix this is to

a​: always use the slab allocator code for OPs

I started with the existing slab allocator but found myself rewriting it
completely.

b​: use a new slab when starting each new compilation
(and by implication, track if this compilation needs more than one
slab)

Each op slab is attached to PL_compcv and has an opslab_next pointer,
allowing multiple slabs.

c​: clean up the *slab* if the compilation fails

cv_undef now does that if the CvSLABBED flag is set but CvROOT is not.

but (b) might increase memory use (ends of partial slabs)
(although *this* could be avoided by copying OPs during the optimiser,
which
could also re-order them to execution order, and drop NULL OPs*)

This I partially solved by making the first slab quite small (about
seven ops), and increasing the size for each subsequent slab. It could
possibly use some improvement.

and (c) might require *removing* current checks which attempt to free
OPs
on failure, to avoid double frees.

Ops on slabs are now marked as being freed (op_type == OP_FREED). After
an error, when the slab is cleaned up, those are skipped.

It strikes me as a big project.

Not as big as the re-eval rewrite. :-)

--

Father Chrysostomos
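
A self-contained C sketch of the cleanup walk described above, using made-up names (FAKE_OP_FREED, op_slab); perl's real code differs, but the shape is the same: freeing an op during building only marks its slot, and the error-path walk skips slots already marked.

  #include <stdlib.h>

  #define SLAB_SLOTS    64
  #define FAKE_OP_FREED 0xFFFF            /* stands in for OP_FREED */

  typedef struct fake_op { unsigned op_type; } fake_op;

  typedef struct op_slab {
      struct op_slab *next;
      size_t          used;
      fake_op         slots[SLAB_SLOTS];
  } op_slab;

  /* freeing an op during tree building just marks its slot ... */
  static void slab_mark_freed(fake_op *o) {
      o->op_type = FAKE_OP_FREED;         /* the slot can later be reused via a free chain */
  }

  /* ... so after a compilation error the slab walk finishes off only the
   * ops that still think they are live, skipping the marked slots, and
   * then drops the slabs themselves. */
  static void slab_cleanup(op_slab *head) {
      while (head) {
          op_slab *next = head->next;
          size_t i;
          for (i = 0; i < head->used; i++) {
              if (head->slots[i].op_type != FAKE_OP_FREED) {
                  /* per-op cleanup (e.g. releasing attached SVs) would go here */
                  head->slots[i].op_type = FAKE_OP_FREED;
              }
          }
          free(head);
          head = next;
      }
  }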

p5pRT commented Sep 23, 2012

From @cpansprout

On Sun Jan 08 08​:03​:21 2012, davem wrote​:

On Wed, Jan 04, 2012 at 11​:14​:30PM -0800, Father Chrysostomos via RT
wrote​:

This patch only reduces the number to 3, so we have other things
leaking. Do failed evals leak in general? Is that unfixable? If
that’s the case, it’s probably not worthwhile plugging something so rare
as this.

failed evals are known to leak OPs during compilation (and this is hard to
fix). They are not supposed to leak SVs, unless possibly those SVs are
attached to OPs.

I had a go at it again, but I am running into problems with newSVsv.
Specifically, newSVsv(sv) on an sv that dies on FETCH is always going to
leak the new SV.

There are probably many uses of newSVsv that suffer from this.

Is the solution to add newSVsv_flags that takes the SVs_TEMP flag?

--

Father Chrysostomos

"${;s/.*/Just an";
other Perl hacker,
/s} die or return;
print

p5pRT commented Sep 24, 2012

From @cpansprout

On Sun Sep 23 06​:55​:05 2012, sprout wrote​:

On Sun Jan 08 08​:03​:21 2012, davem wrote​:

On Wed, Jan 04, 2012 at 11​:14​:30PM -0800, Father Chrysostomos via RT
wrote​:

This patch only reduces the number to 3, so we have other things
leaking. Do failed evals leak in general? Is that unfixable? If
that’s the case, it’s probably not worthwhile plugging something
so rare
as this.

failed evals are known to leak OPs during compilation (and this is
hard to
fix). They are not supposed to leak SVs, unless possibly those are
SVs are
attached to OPs.

I had a go at it again, but I am running into problems with newSVsv.
Specifically, newSVsv(sv) on an sv that dies on FETCH is always going to
leak the new SV.

There are probably many uses of newSVsv that suffer from this.

Is the solution to add newSVsv_flags that takes the SVs_TEMP flag?

No. Only a moron would suggest that. The solution is obviously to make
newSVsv call get-magic before creating the new SV.

This, and hint-hash leaking in general (the subject of this ticket), are
now fixed in commit 0db511c.

--

Father Chrysostomos
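
For context, a hedged sketch of the get-magic-first idea, assuming perl's internal API (SvGETMAGIC, newSV, sv_setsv_nomg) and a hypothetical helper name; the actual change in 0db511c may differ in detail.

  /* Hypothetical helper, not a perl API function: run get-magic on the
   * source before allocating the copy, so a tied FETCH that dies cannot
   * orphan a freshly created SV. */
  SV *
  my_newSVsv_safe(pTHX_ SV *const old)
  {
      SV *copy;
      if (!old)
          return newSV(0);
      SvGETMAGIC(old);           /* a tied FETCH may croak() here; nothing new is allocated yet */
      copy = newSV(0);
      sv_setsv_nomg(copy, old);  /* plain copy; get-magic has already run */
      return copy;
  }

The ordering is the whole point: because get-magic runs before the allocation, nothing exists yet to leak when the tied FETCH dies, which is why calling get-magic inside newSVsv itself plugs the leak without needing an SVs_TEMP flag.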

p5pRT commented Sep 24, 2012

@cpansprout - Status changed from 'open' to 'resolved'

@p5pRT p5pRT closed this as completed Sep 24, 2012

p5pRT commented Sep 24, 2012

From @ppisar

On 2012-09-24, Father Chrysostomos via RT <perlbug-followup@​perl.org> wrote​:

On Sun Sep 23 06​:55​:05 2012, sprout wrote​:

On Sun Jan 08 08​:03​:21 2012, davem wrote​:

On Wed, Jan 04, 2012 at 11​:14​:30PM -0800, Father Chrysostomos via RT
wrote​:

This patch only reduces the number to 3, so we have other things
leaking. Do failed evals leak in general? Is that unfixable? If
that’s the case, it’s probably not worthwhile plugging something
so rare
as this.

failed evals are known to leak OPs during compilation (and this is
hard to
fix). They are not supposed to leak SVs, unless possibly those are
SVs are
attached to OPs.

I had a go at it again, but I am running into problems with newSVsv.
Specifically, newSVsv(sv) on an sv that dies on FETCH is always going to
leak the new SV.

There are probably many uses of newSVsv that suffer from this.

Is the solution to add newSVsv_flags that takes the SVs_TEMP flag?

No. Only a moron would suggest that. The solution is obviously to make
newSVsv call get-magic before creating the new SV.

This, and hint-hash leaking in general (the subject of this ticket), are
now fixed in commit 0db511c.

Really? I've just compiled blead and the test fails. The same happens
when applying the patch to 5.16.1. Running the code through
Test::LeakTrace reveals that two leaks have been plugged, but other ones
remain.

-- Petr

p5pRT commented Sep 24, 2012

From @cpansprout

On Mon Sep 24 05​:09​:58 2012, ppisar wrote​:

On 2012-09-24, Father Chrysostomos via RT <perlbug-followup@​perl.org>
wrote​:

On Sun Sep 23 06​:55​:05 2012, sprout wrote​:

On Sun Jan 08 08​:03​:21 2012, davem wrote​:

On Wed, Jan 04, 2012 at 11​:14​:30PM -0800, Father Chrysostomos via RT
wrote​:

This patch only reduces the number to 3, so we have other things
leaking. Do failed evals leak in general? Is that unfixable? If
that’s the case, it’s probably not worthwhile plugging something
so rare
as this.

failed evals are known to leak OPs during compilation (and this is
hard to
fix). They are not supposed to leak SVs, unless possibly those are
SVs are
attached to OPs.

I had a go at it again, but I am running into problems with newSVsv.
Specifically, newSVsv(sv) on an sv that dies on FETCH is always
going to
leak the new SV.

There are probably many uses of newSVsv that suffer from this.

Is the solution to add newSVsv_flags that takes the SVs_TEMP flag?

No. Only a moron would suggest that. The solution is obviously to make
newSVsv call get-magic before creating the new SV.

This, and hint-hash leaking in general (the subject of this ticket), are
now fixed in commit 0db511c.

Really? I've just compiled blead and the test fails. The same happens
when applying the patch to 5.16.1. Running the code through
Test::LeakTrace reveals that two leaks have been plugged, but other ones
remain.

I tested it with threads+mad and with neither. What configuration are
you using?

--

Father Chrysostomos

p5pRT commented Sep 25, 2012

From @ppisar

On 2012-09-24, Father Chrysostomos via RT <perlbug-comment@​perl.org>
wrote​:

On Mon Sep 24 05​:09​:58 2012, ppisar wrote​:

On 2012-09-24, Father Chrysostomos via RT <perlbug-followup@​perl.org>
wrote​:

This, and hint-hash leaking in general (the subject of this
ticket), are now fixed in commit 0db511c.

Really? I've just compiled blead and the test fails. The same
happens when applying the patch to 5.16.1. Running the code through
Test::LeakTrace reveals that two leaks have been plugged, but other ones
remain.

I tested it with threads+mad and with neither. What configuration are
you using?

I pruned the git working directory and rebuilt blead again. Now the test
passes. The Makefile probably does not track all dependencies.

As for the perl-5.16.1 failure, it turned out that another patch,
895cdc8, is necessary too.

So the current code is OK.

-- Petr
