Hash keys are limited to 2 GB #15237

Closed
p5pRT opened this issue Mar 18, 2016 · 17 comments

p5pRT commented Mar 18, 2016

Migrated from rt.perl.org#127742 (status was 'open')

Searchable as RT127742$


p5pRT commented Mar 18, 2016

From @arc

It is not currently possible to use a string of 2**31 bytes (or larger) as a hash key, even on a 64-bit system.

Furthermore, trying to do so seems to cause invalid length calculations, resulting in an overlarge allocation and a panic in malloc:

$ ./perl -le 'print $^V'
v5.23.9
$ ./perl -e '+{ "x" x 2**31, undef }'
panic: malloc, size=18446744071562068026 at -e line 1.
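
That malloc size is consistent with the 2**31-byte length being squeezed through a signed 32-bit length field and then sign-extended back into a 64-bit allocation size, plus a small fixed overhead. A minimal sketch of that arithmetic on a 64-bit perl (illustrative only, not the actual code path):

$ perl -e '
    my $len    = 2**31;
    my $as_i32 = unpack "l", pack "l", $len;  # squeeze through a signed 32-bit field
    printf "as I32:    %d\n", $as_i32;        # -2147483648
    printf "as size_t: %u\n", $as_i32 + 58;   # 18446744071562068026, as in the panic
'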

The attached patch tries to remedy at least the second of those problems, by detecting the situation and emitting a diagnostic that explicitly says what's happening. However, it is incomplete; I got as far as working out that, to make OP_MULTIDEREF work, we'd need to change the argument types of newSVpvn_share() (part of the public API), and decided that this wasn't something I wanted to tackle during code freeze. But I think it would probably be worth looking into this during the 5.25.x cycle.

--
Aaron Crane ** http://aaroncrane.co.uk/


p5pRT commented Mar 18, 2016

From @arc

0001-Better-handling-for-huge-hash-keys.patch
From ff8fb2afb9808d99fc7f2a3704df927958492ab8 Mon Sep 17 00:00:00 2001
From: Aaron Crane <arc@cpan.org>
Date: Fri, 18 Mar 2016 16:56:43 +0000
Subject: [PATCH] Better handling for huge hash keys

We currently require hash keys to be less than 2**31 bytes long. But (a)
nothing actually tries to enforce that, and (b) if a Perl program tries to
create a hash with such a key (using a 64-bit system), we miscalculate the
size of a memory block, yielding a panic:

$ ./perl -e '+{ "x" x 2**31, undef }'
panic: malloc, size=18446744071562068026 at -e line 1.

Instead, check for this situation, and croak with an appropriate (new)
diagnostic in the unlikely event that it occurs.

This also involves changing the type of an argument to a public API function:
Perl_share_hek() previously took the key's length as an I32, but that makes
it impossible to detect over-long keys, so it must be SSize_t instead.
---
 embed.fnc        |  4 ++--
 hv.c             | 10 +++++++---
 pod/perldiag.pod |  6 ++++++
 proto.h          |  4 ++--
 t/bigmem/hash.t  | 33 +++++++++++++++++++++++++++++++++
 5 files changed, 50 insertions(+), 7 deletions(-)
 create mode 100644 t/bigmem/hash.t

diff --git a/embed.fnc b/embed.fnc
index 049f6c1..87c2855 100644
--- a/embed.fnc
+++ b/embed.fnc
@@ -1331,7 +1331,7 @@ AMpd	|OP*	|op_scope	|NULLOK OP* o
 : Only used by perl.c/miniperl.c, but defined in caretx.c
 px	|void	|set_caret_X
 Apd	|void	|setdefout	|NN GV* gv
-Ap	|HEK*	|share_hek	|NN const char* str|I32 len|U32 hash
+Ap	|HEK*	|share_hek	|NN const char* str|SSize_t len|U32 hash
 #if defined(HAS_SIGACTION) && defined(SA_SIGINFO)
 : Used in perl.c
 np	|Signal_t |sighandler	|int sig|NULLOK siginfo_t *info|NULLOK void *uap
@@ -1926,7 +1926,7 @@ sa	|HE*	|new_he
 sanR	|HEK*	|save_hek_flags	|NN const char *str|I32 len|U32 hash|int flags
 sn	|void	|hv_magic_check	|NN HV *hv|NN bool *needs_copy|NN bool *needs_store
 s	|void	|unshare_hek_or_pvn|NULLOK const HEK* hek|NULLOK const char* str|I32 len|U32 hash
-sR	|HEK*	|share_hek_flags|NN const char *str|I32 len|U32 hash|int flags
+sR	|HEK*	|share_hek_flags|NN const char *str|STRLEN len|U32 hash|int flags
 rs	|void	|hv_notallowed	|int flags|NN const char *key|I32 klen|NN const char *msg
 in	|U32|ptr_hash|PTRV u
 s	|struct xpvhv_aux*|hv_auxinit|NN HV *hv
diff --git a/hv.c b/hv.c
index 7b5ad95..9df211e 100644
--- a/hv.c
+++ b/hv.c
@@ -2922,7 +2922,7 @@ S_unshare_hek_or_pvn(pTHX_ const HEK *hek, const char *str, I32 len, U32 hash)
  * len and hash must both be valid for str.
  */
 HEK *
-Perl_share_hek(pTHX_ const char *str, I32 len, U32 hash)
+Perl_share_hek(pTHX_ const char *str, SSize_t len, U32 hash)
 {
     bool is_utf8 = FALSE;
     int flags = 0;
@@ -2954,7 +2954,7 @@ Perl_share_hek(pTHX_ const char *str, I32 len, U32 hash)
 }
 
 STATIC HEK *
-S_share_hek_flags(pTHX_ const char *str, I32 len, U32 hash, int flags)
+S_share_hek_flags(pTHX_ const char *str, STRLEN len, U32 hash, int flags)
 {
     HE *entry;
     const int flags_masked = flags & HVhek_MASK;
@@ -2963,6 +2963,10 @@ S_share_hek_flags(pTHX_ const char *str, I32 len, U32 hash, int flags)
 
     PERL_ARGS_ASSERT_SHARE_HEK_FLAGS;
 
+    if (UNLIKELY(len > (STRLEN) I32_MAX)) {
+        Perl_croak_nocontext("Sorry, hash keys must be smaller than 2**31 bytes");
+    }
+
     /* what follows is the moral equivalent of:
 
     if (!(Svp = hv_fetch(PL_strtab, str, len, FALSE)))
@@ -2977,7 +2981,7 @@ S_share_hek_flags(pTHX_ const char *str, I32 len, U32 hash, int flags)
     for (;entry; entry = HeNEXT(entry)) {
 	if (HeHASH(entry) != hash)		/* strings can't be equal */
 	    continue;
-	if (HeKLEN(entry) != len)
+	if (HeKLEN(entry) != (SSize_t) len)
 	    continue;
 	if (HeKEY(entry) != str && memNE(HeKEY(entry),str,len))	/* is this it? */
 	    continue;
diff --git a/pod/perldiag.pod b/pod/perldiag.pod
index b0106f0..e2d1f80 100644
--- a/pod/perldiag.pod
+++ b/pod/perldiag.pod
@@ -5525,6 +5525,12 @@ overhauled.
 (F) An ancient error message that almost nobody ever runs into anymore.
 But before sort was a keyword, people sometimes used it as a filehandle.
 
+=item Sorry, hash keys must be smaller than 2**31 bytes
+
+(F) You tried to create a hash containing a very large key, where "very
+large" means that it needs at least 2 gigabytes to store. Unfortunately,
+Perl doesn't yet handle such large hash keys.
+
 =item Source filters apply only to byte streams
 
 (F) You tried to activate a source filter (usually by loading a
diff --git a/proto.h b/proto.h
index 8807867..b0e17b1 100644
--- a/proto.h
+++ b/proto.h
@@ -2792,7 +2792,7 @@ PERL_CALLCONV void	Perl_set_numeric_standard(pTHX);
 PERL_CALLCONV void	Perl_setdefout(pTHX_ GV* gv);
 #define PERL_ARGS_ASSERT_SETDEFOUT	\
 	assert(gv)
-PERL_CALLCONV HEK*	Perl_share_hek(pTHX_ const char* str, I32 len, U32 hash);
+PERL_CALLCONV HEK*	Perl_share_hek(pTHX_ const char* str, SSize_t len, U32 hash);
 #define PERL_ARGS_ASSERT_SHARE_HEK	\
 	assert(str)
 PERL_CALLCONV void	Perl_sortsv(pTHX_ SV** array, size_t num_elts, SVCOMPARE_t cmp);
@@ -4269,7 +4269,7 @@ STATIC HEK*	S_save_hek_flags(const char *str, I32 len, U32 hash, int flags)
 #define PERL_ARGS_ASSERT_SAVE_HEK_FLAGS	\
 	assert(str)
 
-STATIC HEK*	S_share_hek_flags(pTHX_ const char *str, I32 len, U32 hash, int flags)
+STATIC HEK*	S_share_hek_flags(pTHX_ const char *str, STRLEN len, U32 hash, int flags)
 			__attribute__warn_unused_result__;
 #define PERL_ARGS_ASSERT_SHARE_HEK_FLAGS	\
 	assert(str)
diff --git a/t/bigmem/hash.t b/t/bigmem/hash.t
new file mode 100644
index 0000000..e3d2980
--- /dev/null
+++ b/t/bigmem/hash.t
@@ -0,0 +1,33 @@
+#!perl
+BEGIN {
+    chdir 't' if -d 't';
+    @INC = "../lib";
+    require './test.pl';
+}
+
+use Config qw(%Config);
+
+$ENV{PERL_TEST_MEMORY} >= 4
+    or skip_all("Need ~4Gb for this test");
+$Config{ptrsize} >= 8
+    or skip_all("Need 64-bit pointers for this test");
+
+plan(2);
+
+sub exn {
+    my ($code_string) = @_;
+    local $@;
+    return undef if eval "do { $code_string }; 1";
+    return $@;
+}
+
+like(exn('my $h = { "x" x 2**31, undef }'),
+     qr/^\QSorry, hash keys must be smaller than 2**31 bytes\E\b/,
+     "hash constructed with huge key");
+
+TODO: {
+    local $TODO = "Doesn't yet work with OP_MULTIDEREF";
+    like(exn('my %h; $h{ "x" x 2**31 } = undef'),
+         qr/^\QSorry, hash keys must be smaller than 2**31 bytes\E\b/,
+         "assign to huge hash key");
+}
-- 
2.7.2


p5pRT commented Mar 20, 2016

From @bulk88

On Fri Mar 18 11:10:21 2016, arc wrote:

The attached patch tries to remedy at least the second of those
problems, by detecting the situation and emitting a diagnostic that
explicitly says what's happening. However, it is incomplete; I got as
far as working out that, to make OP_MULTIDEREF work, we'd need to
change the argument types of newSVpvn_share() (part of the public
API), and decided that this wasn't something I wanted to tackle during
code freeze. But I think it would probably be worth looking into this
during the 5.25.x cycle.

I think it's a waste of memory to allow hash keys >2GB or >4GB. Keep the length 32-bit. Trying to dedup >2GB strings with a perl hash sounds like a programmer error unless the machine has dozens of TBs of RAM and 100s of TBs of SSDs.

--
bulk88 ~ bulk88 at hotmail.com


p5pRT commented Mar 20, 2016

The RT System itself - Status changed from 'new' to 'open'


p5pRT commented May 16, 2017

From @jkeenan

On Sun, 20 Mar 2016 15:51:29 GMT, bulk88 wrote:

On Fri Mar 18 11:10:21 2016, arc wrote:

The attached patch tries to remedy at least the second of those
problems, by detecting the situation and emitting a diagnostic that
explicitly says what's happening. However, it is incomplete; I got as
far as working out that, to make OP_MULTIDEREF work, we'd need to
change the argument types of newSVpvn_share() (part of the public
API), and decided that this wasn't something I wanted to tackle
during
code freeze. But I think it would probably be worth looking into this
during the 5.25.x cycle.

I think its a waste of memory to allow hash keys >2GB or >4GB. Keep
the length 32 bit. Trying to dedup >2GB strings with a perl hash
sounds like a programmer error unless the machine has dozens of TBs of
RAM and 100s of TBs of SSDs.

I concur with bulk88. Is there any *reasonable* case -- not merely a curiosity -- for hash keys > 2GB?

Thank you very much.

--
James E Keenan (jkeenan@cpan.org)


p5pRT commented May 16, 2017

From zefram@fysh.org

James E Keenan via RT wrote:

I concur with bulk88. Is there any *reasonable* case -- not merely a
curiosity -- for hash keys > 2GB?

If there's any case for strings >2GB, then there's a case for hash keys
that length. And we've already determined to support strings that long.
Having two different string length limits would be madness.

-zefram


p5pRT commented May 16, 2017

From @demerphq

On 16 May 2017 at 03:06, Zefram <zefram@fysh.org> wrote:

James E Keenan via RT wrote:

I concur with bulk88. Is there any *reasonable* case -- not merely a
curiosity -- for hash keys > 2GB?

If there's any case for strings >2GB, then there's a case for hash keys
that length. And we've already determined to support strings that long.
Having two different string length limits would be madness.

Agreed; especially when you consider that the length data is stored
once per unique key.

Also some of the "short string" hacks might be applicable here.
Perhaps we could have our cake and eat it too. (IOW, use the 8 bytes
for the key itself in short key situations.)

There are all kinds of trade-offs here to consider, but IMO wasting
some bytes to not worry about restrictions on key length is a
reasonable one to make.

Yves

--
perl -Mre=debug -e "/just|another|perl|hacker/"


p5pRT commented May 17, 2017

From @dur-randir

2017-05-16 9:49 GMT+03:00 demerphq <demerphq@gmail.com>:

but IMO wasting
some bytes to not worry about restrictions on key length is a
reasonable one to make.

I'd disagree with this. Wasting some bytes seems small, but wasting
them for every hash key in every perl program from now on for a case
which maybe one or two persons will ever use? That's like a 1% price
in memory paid for nothing.

I think that the reasonable behavior would be just to croak() on
attempts to store such strings.

Best regards,
Sergey Aleynikov


p5pRT commented May 17, 2017

From @toddr

On May 17, 2017, at 4:42 AM, Sergey Aleynikov <sergey.aleynikov@gmail.com> wrote:

2017-05-16 9:49 GMT+03:00 demerphq <demerphq@gmail.com>:

but IMO wasting
some bytes to not worry about restrictions on key length is a
reasonable one to make.

I'd disagree with this. Wasting some bytes seems small, but wasting
them for every hash key in every perl program from now on for a case
which maybe one or two persons will ever use? That's like a 1% price
in memory paid for nothing.

I think that the reasonable behavior would be just to croak() on
attempts to store such strings.

I'm glad you replied. This was well said and exactly what I couldn't figure out how to put into words. Anybody using heks that big should be using some sort of library or XS, not Perl.


p5pRT commented May 17, 2017

From @toddr

On May 17, 2017, at 7:56 AM, Todd E Rinaldo <toddr@cpanel.net> wrote:

On May 17, 2017, at 4:42 AM, Sergey Aleynikov <sergey.aleynikov@gmail.com> wrote:

2017-05-16 9:49 GMT+03:00 demerphq <demerphq@gmail.com>:

but IMO wasting
some bytes to not worry about restrictions on key length is a
reasonable one to make.

I'd disagree with this. Wasting some bytes seems small, but wasting
them for every hash key in every perl program from now on for a case
which maybe one or two persons will ever use? That's like a 1% price
in memory paid for nothing.

I think that the reasonable behavior would be just to croak() on
attempts to store such strings.

I'm glad you replied. This was well said and exactly what I couldn't figure out how to put into words. Anybody using heks that big should be using some sort of library or XS, not Perl.

I can imagine a scenario where I might want to mmap a 5GB file into a PV. I can not imagine a scenario where I'd want the file contents in a HEK.

Todd


p5pRT commented May 17, 2017

From zefram@fysh.org

Todd Rinaldo wrote:

I can imagine a scenario where I might want to mmap a 5GB file into
a PV. I can not imagine a scenario where I'd want the file contents in
a HEK.

Using a hash to represent a set of strings. Specifically, construct a
set of the contents of multiple files. Putting things into sets is an
operation that's relevant to any kind of value.

-zefram
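
For concreteness, a hypothetical sketch of that hash-as-set usage (the file list and %seen are made up for the example; nothing here is from the original report). Any file of 2**31 bytes or more would need a hash key larger than perl currently allows:

use strict;
use warnings;

# Use whole-file contents as hash keys to detect duplicate files in one pass.
my %seen;
for my $file (@ARGV) {
    open my $fh, '<:raw', $file or die "$file: $!";
    my $contents = do { local $/; <$fh> };   # slurp the whole file
    print "$file duplicates an earlier file\n" if $seen{$contents}++;
}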


p5pRT commented May 17, 2017

From @demerphq

On 17 May 2017 18:09, "Todd Rinaldo" <toddr@cpanel.net> wrote:

On May 17, 2017, at 7:56 AM, Todd E Rinaldo <toddr@cpanel.net> wrote:

On May 17, 2017, at 4:42 AM, Sergey Aleynikov <sergey.aleynikov@gmail.com> wrote:

2017-05-16 9:49 GMT+03:00 demerphq <demerphq@gmail.com>:

but IMO wasting
some bytes to not worry about restrictions on key length is a
reasonable one to make.

I'd disagree with this. Wasting some bytes seems small, but wasting
them for every hash key in every perl program from now on for a case
which maybe one or two persons will ever use? That's like a 1% price
in memory paid for nothing.

I think that the reasonable behavior would be just to croak() on
attempts to store such strings.

I'm glad you replied. This was well said and exactly what I couldn't
figure out how to put into words. Anybody using heks that big should be
using some sort of library or XS not Perl.

I can imagine a scenario where I might want to mmap a 5GB file into a PV. I
can not imagine a scenario where I'd want the file contents in a HEK.

The fact strings and keys aren't completely normalised shouldn't be an
excuse to make them more different.

I don't have a problem with a build option supporting this if you want to
go through the trouble. I bet it will be more trouble than you think
however.

Yves


p5pRT commented May 17, 2017

From @cpansprout

On Wed, 17 May 2017 09:22:59 -0700, zefram@fysh.org wrote:

Todd Rinaldo wrote:

I can imagine a scenario where I might want to mmap a 5GB file into
a PV. I can not imagine a scenario where I'd want the file contents in
a HEK.

Using a hash to represent a set of strings. Specifically, construct a
set of the contents of multiple files. Putting things into sets is an
operation that's relevant to any kind of value.

Is it possible to reach a compromise between the two positions?

First, since it is quicker to do, have perl detect the situation it cannot handle, and croak with a better message (or a butter massage if you prefer).

Then, when someone who really wants 2GB-hek support gets around to writing a patch, we apply the patch *as long as* it does not significantly increase memory usage by making all heks larger. (One way to do this might be to put large hash key support in a separate module that gets loaded automatically when a large hash key is created. The hash could get uvar or some other magic attached automatically; it would not have to use heks for storing those long strings.)

--

Father Chrysostomos


p5pRT commented May 18, 2017

From @toddr

On May 17, 2017, at 11:22 AM, Zefram <zefram@fysh.org> wrote:

Todd Rinaldo wrote:

I can imagine a scenario where I might want to mmap a 5GB file into
a PV. I can not imagine a scenario where I'd want the file contents in
a HEK.

Using a hash to represent a set of strings. Specifically, construct a
set of the contents of multiple files. Putting things into sets is an
operation that's relevant to any kind of value.

-zefram

Right, so we're talking about a 4-6GB program just to store the contents of 2-3 files in HEKs. Sure, you could do it. But should you? AFAIK you can't / shouldn't be mem-mapping those files into a HEK, so that's right out.

I'd question the time required to even generate the relevant hash for such a string. Not to mention the fact that you're going to double your memory just to refer to those hash keys by putting their contents in an SVPV. Honestly, the whole thing seems very contrived to me.

Todd


p5pRT commented May 18, 2017

From @demerphq

On 18 May 2017 at 22:21, Todd Rinaldo <toddr@cpanel.net> wrote:

On May 17, 2017, at 11:22 AM, Zefram <zefram@fysh.org> wrote:

Todd Rinaldo wrote:

I can imagine a scenario where I might want to mmap a 5GB file into
a PV. I can not imagine a scenario where I'd want the file contents in
a HEK.

Using a hash to represent a set of strings. Specifically, construct a
set of the contents of multiple files. Putting things into sets is an
operation that's relevant to any kind of value.

-zefram

Right, so we're talking about a 4-6GB program just to store the contents of 2-3 files in HEKs. Sure, you could do it. But should you? AFAIK you can't / shouldn't be mem-mapping those files into a HEK, so that's right out.

I'd question the time required to even generate the relevant hash for such a string. Not to mention the fact that you're going to double your memory just to refer to those hash keys by putting their contents in an SVPV. Honestly, the whole thing seems very contrived to me.

I misread the original bug. I thought you wanted to reduce it from 64
to 32 bits.

Until and unless we actually make it support 64-bit lengths, which I
can't see happening any time soon, we should definitely error.

The thing I am missing is why Aaron put the length check where he did.
Why not just put it in PERL_HASH()? That will catch any vector that
might want to hash an overlong string.

I think something like this will do once my hash cleanup patches are applied:

-#define _PERL_HASH_WITH_STATE(state,str,len)                                      \
-    ((len <= SBOX32_MAX_LEN)                                                      \
-     ? sbox32_hash_with_state((state + __PERL_HASH_STATE_BYTES),(U8*)(str),(len)) \
-     : __PERL_HASH_WITH_STATE((state),(str),(len)))
+#define _PERL_HASH_WITH_STATE(state,str,len)                                      \
+    (LIKELY(len <= SBOX32_MAX_LEN)                                                \
+     ? sbox32_hash_with_state((state + __PERL_HASH_STATE_BYTES),(U8*)(str),(len)) \
+     : LIKELY(len < (STRLEN) I32_MAX)                                             \
+     ? __PERL_HASH_WITH_STATE((state),(str),(len))                                \
+     : Perl_croak_nocontext("Sorry, hash keys must be smaller than 2**31 bytes"))

Yves

--
perl -Mre=debug -e "/just|another|perl|hacker/"


p5pRT commented Jun 1, 2017

From @demerphq

On 19 May 2017 at 00:05, demerphq <demerphq@gmail.com> wrote:

On 18 May 2017 at 22:21, Todd Rinaldo <toddr@cpanel.net> wrote:

On May 17, 2017, at 11:22 AM, Zefram <zefram@fysh.org> wrote:

Todd Rinaldo wrote:

I can imagine a scenario where I might want to mmap a 5GB file into
a PV. I can not imagine a scenario where I'd want the file contents in
a HEK.

Using a hash to represent a set of strings. Specifically, construct a
set of the contents of multiple files. Putting things into sets is an
operation that's relevant to any kind of value.

-zefram

Right, so we're talking about a 4-6GB program just to store the contents of 2-3 files in HEKs. Sure, you could do it. But should you? AFAIK you can't / shouldn't be mem-mapping those files into a HEK, so that's right out.

I'd question the time required to even generate the relevant hash for such a string. Not to mention the fact that you're going to double your memory just to refer to those hash keys by putting their contents in an SVPV. Honestly, the whole thing seems very contrived to me.

I misread the original bug. I thought you wanted to reduce it from 64
to 32 bits.

Until and unless we actually make it support 64-bit lengths, which I
can't see happening any time soon, we should definitely error.

The thing I am missing is why Aaron put the length check where he did.
Why not just put it in PERL_HASH()? That will catch any vector that
might want to hash an overlong string.

I think something like this will do once my hash cleanup patches are applied:

-#define _PERL_HASH_WITH_STATE(state,str,len)                                      \
-    ((len <= SBOX32_MAX_LEN)                                                      \
-     ? sbox32_hash_with_state((state + __PERL_HASH_STATE_BYTES),(U8*)(str),(len)) \
-     : __PERL_HASH_WITH_STATE((state),(str),(len)))
+#define _PERL_HASH_WITH_STATE(state,str,len)                                      \
+    (LIKELY(len <= SBOX32_MAX_LEN)                                                \
+     ? sbox32_hash_with_state((state + __PERL_HASH_STATE_BYTES),(U8*)(str),(len)) \
+     : LIKELY(len < (STRLEN) I32_MAX)                                             \
+     ? __PERL_HASH_WITH_STATE((state),(str),(len))                                \
+     : Perl_croak_nocontext("Sorry, hash keys must be smaller than 2**31 bytes"))

I have merged Aaron's patch with mine and pushed it as

b332a97d8b83526c8664d70288943f830d351ae9

Thanks Aaron!

Yves

--
perl -Mre=debug -e "/just|another|perl|hacker/"


p5pRT commented Oct 14, 2019

From @toddr

I have merged Aaron's patch with mine and pushed it as

b332a97d8b83526c8664d70288943f830d351ae9

Thanks Aaron!

Yves

The commit appears to be b02f364 and was included in 5.27.1. Should this case be closed?
