
stat dies on valid filename not existing #13557

Closed
p5pRT opened this issue Jan 26, 2014 · 57 comments


p5pRT commented Jan 26, 2014

Migrated from rt.perl.org#121085 (status was 'rejected')

Searchable as RT121085$


p5pRT commented Jan 26, 2014

From perl-diddler@tlinx.org

Created by perl-diddler@tlinx.org

# next line uses ^V^J after 'test' to create a file with a newline at the end of its name
# note: this is a valid Unix name.

touch "test
"
Ishtar:/tmp> perl -MP -we 'use strict;
if (-e "test
") {
P "test exists";
}
'
test exists ## this works
Ishtar:/tmp> perl -MP -we 'use strict;
if (-e "test2
") {
P "test exists";
}
'
Unsuccessful stat on filename containing newline at -e line 2.

## ^^ perl dies at this point -- not a valid response to a
non-existent filename, as the following test doesn't die:

Ishtar:/tmp> perl -MP -w -e 'use strict;
if (-e "test2"
) {
P "test exists";
}
'
#(no output)

Perl should not die on a stat call failure.

(FWIW, the file name was downloaded from a website that had
something like
<... src="image.jpg
">

The file downloaded correctly, though I think the newline was
stripped off; but a program that stores the output of an HTML
download "as is" was checking to see whether the file had
already been downloaded and stored locally.

Note that its existence is verified if the file exists -- it
is only when the file doesn't exist that perl throws an invalid
error message and dies.

Perl Info

Flags:
    category=core
    severity=medium

Site configuration information for perl 5.16.3:

Configured by law at Wed Jan 22 12:58:58 PST 2014.

Summary of my perl5 (revision 5 version 16 subversion 3) configuration:
   
  Platform:
    osname=linux, osvers=3.12.0-isht-van, archname=x86_64-linux-thread-multi-ld
    uname='linux ishtar 3.12.0-isht-van #1 smp preempt wed nov 13 16:50:51 pst 2013 x86_64 x86_64 x86_64 gnulinux '
    config_args=''
    hint=previous, useposix=true, d_sigaction=define
    useithreads=define, usemultiplicity=define
    useperlio=define, d_sfio=undef, uselargefiles=define, usesocks=undef
    use64bitint=define, use64bitall=define, uselongdouble=define
    usemymalloc=n, bincompat5005=undef
  Compiler:
    cc='gcc', ccflags ='-D_REENTRANT -D_GNU_SOURCE -fno-strict-aliasing -pipe -fstack-protector -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64',
    optimize='-g -O2',
    cppflags='-D_REENTRANT -D_GNU_SOURCE -fno-strict-aliasing -pipe -fstack-protector -D_REENTRANT -D_GNU_SOURCE -fno-strict-aliasing -pipe -fstack-protector -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -D_REENTRANT -D_GNU_SOURCE -fno-strict-aliasing -pipe -fstack-protector -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64'
    ccversion='', gccversion='4.8.1 20130909 [gcc-4_8-branch revision 202388]', gccosandvers=''
    intsize=4, longsize=8, ptrsize=8, doublesize=8, byteorder=12345678
    d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=16
    ivtype='long', ivsize=8, nvtype='long double', nvsize=16, Off_t='off_t', lseeksize=8
    alignbytes=16, prototype=define
  Linker and Libraries:
    ld='gcc', ldflags ='-g -fstack-protector -fPIC'
    libpth=/usr/lib64 /lib64
    libs=-lnsl -lndbm -lgdbm -ldb -ldl -lm -lcrypt -lutil -lpthread -lc -lgdbm_compat
    perllibs=-lnsl -ldl -lm -lcrypt -lutil -lpthread -lc
    libc=/lib/libc-2.18.so, so=so, useshrplib=true, libperl=libperl-5.16.3.so
    gnulibc_version='2.18'
  Dynamic Linking:
    dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef, ccdlflags='-Wl,-E -Wl,-rpath,/home/perl/perl-5.16.3/lib/x86_64-linux-thread-multi-ld/CORE'
    cccdlflags='-fPIC', lddlflags='-shared -g -O2 -fstack-protector -fPIC'

Locally applied patches:
    


@INC for perl 5.16.3:
    /home/law/bin/lib
    /home/perl/perl-5.16.3/lib/site/x86_64-linux-thread-multi-ld
    /home/perl/perl-5.16.3/lib/site
    /home/perl/perl-5.16.3/lib/x86_64-linux-thread-multi-ld
    /home/perl/perl-5.16.3/lib
    .


Environment for perl 5.16.3:
    HOME=/home/law
    LANG=en_US.utf8
    LANGUAGE (unset)
    LC_COLLATE=C
    LC_CTYPE=en_US.utf8
    LD_LIBRARY_PATH (unset)
    LOGDIR (unset)
    PATH=/home/law/bin/lib:/sbin:/home/law/bin:/usr/local/bin:/usr/bin:/bin:/usr/X11R6/bin:/usr/games:/opt/kde3/bin:/usr/lib/mit/bin:/usr/lib/mit/sbin:.:/usr/lib/qt3/bin:/opt/dell/srvadmin/bin:/usr/sbin:/etc/local/func_lib:/home/law/lib
    PERL5OPT=-Mutf8 -CSA -I/home/law/bin/lib
    PERL_BADLANG (unset)
    SHELL=/bin/bash


p5pRT commented Jan 26, 2014

From victor@vsespb.ru

but this is documented behaviour - see perldiag and this answer
http​://stackoverflow.com/a/2652314/1625053



p5pRT commented Jan 26, 2014

The RT System itself - Status changed from 'new' to 'open'


p5pRT commented Jan 26, 2014

From perl-diddler@tlinx.org

On Sun Jan 26 03:57:38 2014, vsespb wrote:

but this is documented behaviour - see perldiag and this answer
http://stackoverflow.com/a/2652314/1625053
....
There is some evidence that &CORE::stat mtime might be broken with some combinations of OS patchlevel and ActiveState Perl versions - a suggested workaround is to use the File::stat module like so:
my $sb = stat($File::Find::name);
my $mtime = scalar localtime $sb->mtime;

you might find File::stat's object representation to be more convenient than the list returned by CORE::stat.


So you are saying that CORE::stat is broken, but it's ok because it is documented and there is a workaround?

Um... You know what they say about "assume". You make an ass out of U and me? Following the specifications is far preferable to broken CORE functions.

The fact of the matter was that I was replicating a valid file name, and perl can't test for that file name's existence w/o dying.

It's also the case that the treatment of the file name is "passed" if the file exists, but breaking a core function to put in a ***faulty***, fatal error is just ridiculous.

If a user wants their filenames sanitized, a pragma might be a valid solution, but invalidating legal paths is still broken -- documented or not -- perl can't safely work with filenames because someone might get a bug up their bonnet about some OTHER character.

I've seen some POSIX folks suggest that any non-printing character should be illegal (including space). If someone *wants* to sanitize their names, they can -- but forcing things on people because someone else thinks it is "good for them" has resulted in a lot of different kinds of abuse over human history.

Carving a special exception for perl to allow forcing someone's idea of right is no better than any other such situation.


p5pRT commented Jan 26, 2014

From @rjbs

~$ perl -wE 'use strict;
if (-e "test2
") {
say "test exists";
}
say "and then keep running"
';

Unsuccessful stat on filename containing newline at -e line 2.
and then keep running

Perl does not die. It warns. This is documented and not a bug.

--
rjbs


p5pRT commented Jan 26, 2014

@rjbs - Status changed from 'open' to 'rejected'

p5pRT closed this as completed Jan 26, 2014

p5pRT commented Jan 26, 2014

From perl-diddler@tlinx.org

On Sun Jan 26 04:28:48 2014, rjbs wrote:

~$ perl -wE 'use strict;
if (-e "test2
") {
say "test exists";
}
say "and then keep running"
';

Unsuccessful stat on filename containing newline at -e line 2.
and then keep running

Perl does not die. It warns. This is documented and not a bug.


In many programs, warnings are not allowed.

Any warnings are considered errors.

If a developer or user wants a pragma to check for such things, then "buy-in" would be the way to go... then it doesn't disturb valid usage.


p5pRT commented Jan 26, 2014

From @karenetheridge

On Sun, Jan 26, 2014 at 04:35:52AM -0800, Linda Walsh via RT wrote:

In many programs, warnings are not allowed.
Any warnings are considered errors.

Developers must understand the ramifications of 'use warnings FATAL => "all"'
or of using the -w option.

If a developer or user wants a pragma to check for such things, then "buy-in" would be the way to go... then it doesn't disturb valid usage.

See perldoc perldiag - it describes how to suppress the warning for this
particular case.


p5pRT commented Jan 27, 2014

From @demerphq

On 26 January 2014 22:46, Karen Etheridge <perl@froods.org> wrote:

On Sun, Jan 26, 2014 at 04:35:52AM -0800, Linda Walsh via RT wrote:

In many programs, warnings are not allowed.
Any warnings are considered errors.

Developers must understand the ramifications of 'use warnings FATAL => "all"'
or by using the -w option.

Indeed. One of the ramifications of warnings FATAL is that programs
which do not die on an older Perl may die on a later Perl release due
to new warnings being added.

Yves

--
perl -Mre=debug -e "/just|another|perl|hacker/"
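Yves's point can be demonstrated directly: under fatal warnings the very same run-time diagnostic terminates the program (a sketch with a hypothetical missing file):

```shell
# Under 'use warnings FATAL => "all"' the newline diagnostic is promoted
# to a die: the final print never runs and perl exits non-zero.
out=$(perl -e '
  use warnings FATAL => "all";
  my $r = -e "no-such-file\n";
  print "still running\n";
' 2>&1)
status=$?
echo "exit status: $status"
echo "$out"
```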


p5pRT commented Jan 27, 2014

From perl-diddler@tlinx.org

On Sun Jan 26 13:47:19 2014, perl@froods.org wrote:

Developers must understand the ramifications of 'use warnings FATAL =>
"all"'
or by using the -w option.


  Indeed. One must look at reasons why they do things as well.

  My background is kernel and security programming. It is common practice to ensure the kernel compiles with no warnings.

  This has roots in the security "best practices" as recommended by CERT @
  https://www.securecoding.cert.org/confluence/display/seccode/Top+10+Secure+Coding+Practices

  I refer specifically to this section:

***Heed compiler warnings.***

  *** Compile code using the ___highest warning level available for your
  *** compiler and eliminate warnings by modifying the code
  *** [C MSC00-A, C++ MSC00-A]


  In an update to these notes, @
  https://www.securecoding.cert.org/confluence/display/seccode/MSC00-C.+Compile+cleanly+at+high+warning+levels , it says:

* MSC00-C. Compile cleanly at high warning levels
  - Added by Robert C. Seacord, last edited by Carol J. Lallier on Oct 25, 2013
  Compile code using the highest warning level available for your
  compiler and eliminate warnings by modifying the code.

  ....

  MSC00-EX1​: Compilers can produce diagnostic messages for correct code, ...
  [however]...

  *** Do not simply quiet warnings***

  ...Instead, understand the reason for the warning and consider
  a better approach, such as using matching types and avoiding
  type casts whenever possible.


  As a result of security "best practice advice" for the past 10-15 years, it
has been my habit to always enable warnings and to treat them as fatal errors.

  Not doing so would go against "best practices" for secure software.

  That many in the perl community disagree with enabling warnings and treating
them as fatal errors demonstrates a lack of experience and knowledge in
security practice. A lack of knowledge, though, is not a big deal (it is
fixable).

  Willful adherence to ignorance and bad practice is a 1st order danger to the success of a project or product.

  Best practices specifically advise to compile with the highest level of warnings turned on, treat warnings as ERRORS, and only ship production code that runs without errors or warnings.

 
  Comments from perl leadership, established developers et al, indicate they have no problem suddenly turning on new warnings that can cause established code in production to fail.

  The effect of such a policy, in a language that can generate new warnings on any major update, is to dissuade best security practices. Those responsible for such policies are ultimately responsible for a general lowering of good security practice in those who use such code.

  To advise against treating "warnings-as-errors", and to design and develop software that encourages bad security practice, contributes to a lowering of security practice in the users of such. Indeed, as evidence of this, look at how many CPAN projects build and install w/no warnings.
 

As a core language developer wrote:
  "Indeed. One of the ramifications of "/warnings FATAL/"
  is that programs which do not die on an older Perl may
  die on a later Perl release due to new warnings being added. "

  In other words, if one tries to follow best security practices -- just to the
extent of using pragmas that make all warnings fatal -- then due to the current
design and maintenance team's practices, programs & products in the field
may die w/o notice (or deprecation cycle), w/a new release of perl.

  Incompatible programming changes were supposed to be preceded by
announcements and a deprecation cycle -- with new "features" (like new
warnings) being activated only on an opt-in basis. However, the current
maintenance team has shown they are unable to even follow this practice,
enabling new "warning features" w/o notice, and even activating warnings on
"valid code" processing "valid data", in a misguided example of paternalism.

  The current maintenance team is obviously free to continue along their path
and likely will continue to ignore any input contrary to what they want to hear,
including citations of software & security "best practices". They may
continue to implement policies that punish good software practice, though this
will continue to have consequences for the future of this product.

  I would point out that a more egregious example of turning off warnings and
errors has to do with blanket filtering out (or turning off) people who raise
warnings and errors and who have some minimal experience in such areas. I see
this as a worse problem -- in that it is a 2nd-order level of willful
ignorance. Not only is there a refusal to learn, but it extends to disallowing
even potential sources of alternate opinions.

  This makes for a more toxic environment -- affecting not just a current
revision of a product, but its entire future. It becomes like compound
interest in how its effects grow over time. Eventually, the side effects of
such policies become too large to ignore. Creators of such projects have even
been known to try to "restart/reset" the project with new and incompatible
designs. Sometimes these have succeeded and sometimes not. What is clear is
that bad software practice often generates more of the same.

 
  I hope I have been sufficiently clear such that you know how "aware" I am of
the "ramifications" of using warnings->FATAL, as well as the ramifications
of NOT doing so.


p5pRT commented Jan 27, 2014

From @demerphq

On 27 January 2014 11:36, Linda Walsh via RT <perlbug-followup@perl.org> wrote:


As a core language developer wrote:
"Indeed. One of the ramifications of "/warnings FATAL/"
is that programs which do not die on an older Perl may
die on a later Perl release due to new warnings being added. "

In other words, if one tries to follow best security practices - just to the
extent of using pragmas that make all warnings fatal , then due to the current
design and maintenance team's practices, programs & products in in the field
may die w/o notice (or deprecation cycle), w/a new release of perl.

Yes, and this is compliant with the CERT references you made.

You asked for fatal warnings.

You upgraded.

We started warning about something that we consider to be a likely problem.

Your code died.

Thus exactly what was supposed to happen happened.

You can't say "I want to be secure and treat all warnings as fatal" AND
simultaneously say "but I want to be able to upgrade without worrying
my program will die due to new warnings".

Incompatible programming changes were supposed to be preceeded by
announcements and a deprecation cycle -- with new "features (like new
warnings)" being activated only on an opt-in basis.

Nonsense. Where does it say that?

However, the current
maintenance team has shown they are unable to even follow this practice,
enabling new "warning features" w/o notice, and even activating warnings in
"valid code", processing "valid data" in a misguided example of paternalism.

You *volunteered* for this paternalism by enabling fatal warnings.

You specifically asked for it.

And you did it for the same reason we started producing warnings.

Ergo you are whining about getting what you asked for.

That isn't nice.

The current maintenance team is obviously free to continue along their path
and likly will continue to ignore any input contrary to what they want to hear,
including citations of software & security "best practices". They main
continue to implement policies punish good software practice though this will
continue to have consequences for the future of this product.

Your position is totally inconsistent. You want to be secure, but you
are complaining about us warning that you are doing something that is
probably dodgy.

You simply cannot have it both ways.

And your attempt to turn this into an attack on the Perl development
team is not appreciated.

Yves


p5pRT commented Jan 27, 2014

From @epa

Linda Walsh via RT <perlbug-followup <at> perl.org> writes​:

My background is kernel and security programming. It is common practice
to ensure the kernel compiles with no warnings.

That is certainly a good idea. But Perl's warnings are a mixture of
compile time and run time warnings. The kernel is not designed to panic
on all run time warnings - where possible it writes them to the log such
as 'dmesg' and continues.

In Perl, you can make many compile-time warnings fatal using

  use warnings FATAL => 'syntax';

--
Ed Avis <eda@​waniasset.com>


p5pRT commented Jan 27, 2014

From perl-diddler@tlinx.org

On Sun Jan 26 20:10:59 2014, demerphq wrote:

In other words, if one tries to follow best security practices - just to
the extent of using pragmas that make all warnings fatal , then due to the
current design and maintenance team's practices, programs & products in in
the field may die w/o notice (or deprecation cycle), w/a new release of
perl.

Yes, and this is compliant with the CERT references you made.

You asked for fatal warnings.


Elsewhere, though it didn't make my final draft, more than one source
says they consider making all warnings 'fatal' to be part of best
practices -- though more indicate that such would be true only for new
code -- not old code, due to the volume of needed changes. If you want,
I can include multiple references for this not being "simply my opinion",
but the opinion of usually more senior developers, with it being less
popular among those with less experience.

You upgraded.


  When I move my code up through successive versions of gcc, I don't
expect the code to die later on in execution.

  Part of the problem here -- that makes responsibilities in perl
different, is that, unlike upgrading a C compiler, when perl is upgraded
everything is affected. If one upgrades a C compiler, the pre-existing
binaries still run w/o problems presuming they are statically linked
or linked with versioned libraries that are not deleted on upgrade.

We started warning about something that we consider to be a likely
problem.


  Without giving advance warning as procedures regarding non-compatible
changes indicate should be done.

Your code died.

Thus exactly what was supposed to happen happened.


  Well, this is where communication has broken down. 1) lexical $_
was not experimental in 5.16. It was downgraded to experimental status
(where it can be removed w/o warning) -- without any advance notice
that this would be done.

  While some experimental features are new -- some of the features
that are now generating warnings are ones that have been around for 4-5 years.

  It was my belief that if one introduced an experimental feature in
a released product -- and it stayed stable into the next major release --
the experimental period had expired. If I'd reread all of the documentation
with each new release, I would have found that not to be the case, but most
people don't have time to re-read all of the documentation, look for changes,
and expect statements about "what to avoid in production code" (e.g. experimental
features) to be re-emphasized in each major release that they remain experimental.

  What is the point of releasing new features but keeping them experimental --
who does that benefit? Certainly not the perl community as a whole, who read
books and other documentation that talk about the new features and how such
features were introduced 2, 4, 6 or 8 major versions ago as experimental features.

  If I'd been aware that such new features -- often released with much fanfare as
solving long-existing problems in perl -- would never be released but would remain
experimental for years, I would have suggested that they either be made
features or removed long ago -- as they are useless baggage that contributes
to the instability and dead weight of the language.

  Some of the things that others don't like about those features are ones
that I DO like and do find useful. The decision-making process about what is
good or bad in features is tailored to exclude any input that ranges too
far from what is considered acceptable.

You cant say "I want to be secure and treat all warnings as fatal" AND
simultaneously say "but I want to be able to upgrade without worrying
my program will die due to new warnings".


  If a security-relevant bug is found, I'd expect an error, not a warning.

  Second-guessing what previously working code does at run time is, IMO,
far worse than the sin of forcing new features on users who do not
opt in. The new warnings about experimental features are a new feature
that was forced on users -- even though perl rules and guidelines are specific
about not introducing such execution-affecting features without the user
opting in with a "use feature experimental_warnings" or a "use 5.18" that
usually turns on such new features.

Incompatible programming changes were supposed to be preceeded by
announcements and a deprecation cycle -- with new "features (like new
warnings)" being activated only on an opt-in basis.

Nonsense. Where does it say that?


  http://perldoc.perl.org/perlpolicy.html:

  Any language change which breaks backward-compatibility should be able to be
  enabled or disabled lexically. ****Unless code at a given scope declares that it
  wants the new behavior***, that new behavior should be disabled<<<<.
  Which backward-incompatible changes
  *** are controlled *implicitly* by a 'use v5.x.y'**** is a decision which
  should be made by the pumpking in consultation with the
  community.

  Whether or not the new experimental warnings would be implicit with
a "use 5.18.0" would be Ricardo's decision in consultation with the
community (which p5p isn't representative of -- it's a closed,
self-privileged design group who promote the idea that they should
be held special for being **able** to decide what goes into perl --
and try to emphasize their "volunteer status"). Unfortunately,
many volunteers (and I know I'm not the only one) are not only
NOT allowed to volunteer (anything more than ill-received bug
reports or SHAT-upon CPAN modules that get rated on political
and popularity issues rather than how well they do what they purport
to do), but are harassed and excluded from input in such decisions.

Regardless, that would only apply to new features made automatic/implicit
via a "use 5.18.+". Forcing incompatible changes on users who
don't opt in goes against perl policy.

That you and others don't know this is, **obviously**, part of the
problem. It's no coincidence that someone who does know the policy
is not even allowed input into the self-ascribed decision-makers' list.

However, the current maintenance team has shown they are unable to even
follow this practice, enabling new "warning features" w/o notice, and even
activating warnings in "valid code", processing "valid data" in a misguided
example of paternalism.

You *vulneteered* for this paternalism by enabling fatal warnings.


  Nope. I wasn't aware perl monitored what was an appropriate data
format -- EXCEPT for the UTF-8 debacle.

You specifically asked for it.


  I ask for warnings that happen when I run and develop the program --
not years later when processing 3rd party data and using the linux file
system according to specification.

And you did it for the same reason we started producing warnings.

Ergo you are whining about getting what you asked for.


  I don't ask for warnings about *valid* data that some find
questionable. I ask for warnings about things I can fix
at compile time or in a fixed test cycle. Popping up warnings
that are about code is what I turn on warnings for.

Warning me about my file names or data formats is not something
a mature language does. You don't see "C" or "C++" giving such
warnings. OR, if they do, they happen at compile time and refer
to a possible run-on string inside a quote.

In my case, a rather sophisticated and cool library (Mojolicious)
passed back the file name in question, as it was exactly what
was in the HTML page.

You have NO browsers that I know of that will flag such a pathname
or link as a warning. If nothing else, they "auto-strip" a trailing
LF or space as perl does on the open call when you use the 2-arg
form of open. Perl knows how to handle trailing white space on
filenames and does so automatically with the 2-arg format -- it has
worked that way as long as I've been aware of the issue -- and
I've been using perl for over 20 years.

The stat call has never had such a bug. Again-- a new 'feature' has
been implemented "checking for newline at the end of a file name" now
throws an error if the file doesn't exist. Sort of a wishy-washy feature,
but again -- I didn't tell it to check my filenames and there is no
documentation (despite documenting the warning, the filename
sanitation feature isn't documented with the stat call or anywhere
else that I know of).

That isn't nice.


  Neither is turning on undocumented data-checks at run time.

  Perl DOES document data checks on file names in the open
call under certain circumstances. Any other checking is a bug.

  Just because Intel documented wrong behavior of their math
unit in the pentium chip -- didn't mean it wasn't still a bug.

The current maintenance team is obviously free to continue along their path
and likely will continue to ignore any input contrary to what they want to
hear, including citations of software & security "best practices". They
may continue to implement policies that punish good software practice, though
this will continue to have consequences for the future of this product.

Your position is totally inconsistent. You want to be secure, but you are
complaining about us warning that you are doing something that is probably
dodgy.


  You are repeating yourself. The differences are:

  1) warning me about code vs. warning me about data that may never be hit, but
should be handled as documented (as in open)
  2) not turning on new experimental warning features without code buy-in, as
per perl-policy.
  3) keeping features in experimental status beyond 1 cycle -- either make them
features or remove them -- but always document anything experimental in major
release notes so they can be viewed in perl5XYdelta and people can know what to
avoid.
  4) Stop using released perl as your playground. When I wanted a new feature,
I was told CPAN was the place for it. But if I am part of the "in clique",
I can have it as an experimental feature. Don't expect any gratitude or
respect for such "volunteerism" -- it pollutes the code.
  5) changing listed features from non-experimental status to experimental
status w/o deprecation, making them subject to new features like
"experimental warnings".

You simply cannot have it both ways.


  See above.

And your attempt to turn this into an attack on the Perl development team is
not appreciated.


  An attack? please. I was very careful to NOT make it one. That you
take any criticism or difference of opinion as an attack seems to be
trying to provoke or escalate this into one. My response was
researched and multiply revised so as to not be attacking. However,
some people are predisposed to take anything as an attack and look for
any excuse to go on the offensive.

@p5pRT

p5pRT commented Jan 27, 2014

From @epa

Linda Walsh via RT <perlbug-followup <at> perl.org> writes​:

Elsewhere, though it didn't make my final draft, more than one source
says they consider making all warnings 'fatal' to be part of best
practices -- though more indicate that such would be true only for new
code -- not old code due to the volume of needed changes. If you want
I can include multiple references for this not being "simply my opinion",

I think it would be a good idea to find sources for this advice, which in
my opinion is not a best practice at all. Then we can find what the reasoning
is and, I expect, find out the different assumptions on one side or the other
and perhaps reach some consensus.

--
Ed Avis <eda@​waniasset.com>

@p5pRT

p5pRT commented Jan 27, 2014

From @iabyn

On Mon, Jan 27, 2014 at 03​:24​:54AM -0800, Linda Walsh via RT wrote​:

Part of the problem here -- that makes responsibilities in perl
different, is that, unlike upgrading a C compiler, when perl is upgraded
everything is affected. If one upgrades a C compiler, the pre-existing
binaries still run w/o problems presuming they are statically linked
or linked with versioned libraries that are not deleted on upgrade.

I hope you realise that gcc and perl are two *completely* different
situations? gcc is compile-time, and (mostly) only generates compile-time
warnings. Perl both compiles and runs a program, and can be asked to
produce run-time warnings.

Upgrading gcc can quite possibly introduce new compile-time warnings on
your existing code base. Which you would need to fix *before* deploying.
Upgrading perl can introduce new run-time warnings. The release notes
document what new warnings have been introduced. It is your
responsibility to decide whether you wish to continue running your code
with all warnings turned fatal.

We started warning about something that we consider to be a likely
problem.
----
Without giving advance warning as procedures regarding non-compatible
changes indicate should be done.

The "Unsuccessful %s on filename containing newline" warning has been
there since perl 5.000 back in 1994. So by enabling FATAL => all warnings,
*you* took responsibility for ensuring that your code should never
generate this warning. The fact that it did indicates a deficiency in
your code.

I don't ask for warnings about *valid* data that some find
questionable. I ask for warnings about things I can fix
at compile time or in a fixed test cycle. Popping up warnings
that are about code is what I turn on warnings for.

Then only turn on those categories of warnings.

Warning me about my file names or data formats is not something
a mature language does. You don't see "C" or "C++" giving such
warnings. OR, if they do, they happen at compile time and refer
to a possible run-on string inside a quote.

Because perl isn't C. You are criticising something that is in the very
nature and style of perl.

You have NO browsers that I know of that will flag such a pathname
or link as a warning.

Perl isn't a browser.

If nothing else, they "auto-strip" a trailing
LF or space as perl does on the open call when you use the 2-arg
form of open. Perl knows how to handle trailing white space on
filenames and does so automatically with the 2-arg format -- it has
worked that way as long as I've been aware of the issue -- and
I've been using perl for over 20 years.

Then you'll know that for 19 years perl has warned on stats and similar
operations on filenames ending in \n. And it does it for a very good
reason. Because lots of people wrote (and still write) code like

    while (my $file = <>) {
        if (-e $file) ...

without the necessary chomp, then wonder why their code doesn't work.

The stat call has never had such a bug. Again-- a new 'feature' has
been implemented "checking for newline at the end of a file name" now
throws an error if the file doesn't exist.

It's not a new feature, and it doesn't throw an error unless you ask it to.

Sort of a wishy-washy feature,

If you don't want warnings on wishy-washy features, then don't enable
those categories of warnings.

but again -- I didn't tell it to check my filenames and there is no
documentation (despite documenting the warning -- the filename
sanitation feature isn't documented with the stat call or anywhere
else that I know of).

It's documented in perldiag. Which of course you've perused in fine
detail, since you've turned on FATAL warnings for every warning, and have
therefore committed to being aware of every warning documented therein.

--
"Emacs isn't a bad OS once you get used to it.
It just lacks a decent editor."

@p5pRT

p5pRT commented Jan 27, 2014

From @rjbs

* Linda Walsh via RT <perlbug-followup@​perl.org> [2014-01-27T06​:24​:54]

Any language change which breaks backward-compatibility should be able to be
enabled or disabled lexically. **Unless code at a given scope declares that it
wants the new behavior, that new behavior should be disabled.**

I do not consider introducing new warnings to be breaking backward
compatibility in code that has specifically requested that all warnings be
turned on. Instead, it is forward compatibility​: your code does not need to be
altered to get new warnings.

Turning on fatal warnings should be done selectively, in Perl, both to make
upgrading safer and because some fatal compile time warnings are, last I
checked, still prone to causing weird error reporting. Personally, I tend to
select no fatal warnings, but that's certainly a matter of preference.

At any rate, we have now left the realm of the ticket's initial report.
Backward compatibility is not the subject, as the newline-in-statted-filename
warning dates to the big 5.0.

Whether or not the new experimental warnings would be implicit with
a "use 5.18.0" would be Ricardo's decision in consultation with the
community (which p5p isn't representative of -- it's a closed,
self-privileged design group which promotes the idea that its members
should be held special for being **able** to decide what goes into
perl -- and tries to emphasize its "volunteer status")...

p5p is not closed. It is an open-subscription mailing list with public
archives.

Unfortunately, many volunteers (and I know I'm not the only one) are not only
NOT allowed to volunteer (anything more than ill-received bug reports or SHAT-
upon CPAN modules that get rated on political and popularity issues rather
than how well they do what they purport to do), but are harassed and
excluded from input in such decisions.

You are, to the best of my knowledge, the only person who has been more or less
permanently removed from perl5-porters, at least in the last five years when
I've been heavily involved.

Warning me about my file names or data formats is not something
a mature language does. You don't see "C" or "C++" giving such
warnings. OR, if they do, they happen at compile time and refer
to a possible run-on string inside a quote.

This is a bizarre criticism. Perl is not C. The same restrictions and design
priorities need not apply, and in some cases, ought not to apply.

The stat call has never had such a bug. Again-- a new 'feature' has
been implemented "checking for newline at the end of a file name" now
throws an error if the file doesn't exist.

Again, not so.

--
rjbs

@p5pRT

p5pRT commented Jan 28, 2014

From perl-diddler@tlinx.org

On Mon Jan 27 07​:31​:11 2014, perl.p5p@​rjbs.manxome.org wrote​:

* Linda Walsh via RT <perlbug-followup@​perl.org> [2014-01-27T06​:24​:54]

Any language change which breaks backward-compatibility should be able to be
enabled or disabled lexically. **Unless code at a given scope declares that it
wants the new behavior, that new behavior should be disabled.**

I do not consider introducing new warnings to be breaking backward
compatibility in code that has specifically requested that all warnings be
turned on. Instead, it is forward compatibility​: your code does not need to
be altered to get new warnings.


  New warnings are an incompatible code change --- they change the
way old, functioning programs work. As such, they should be
turned on only with "use 5.18" or use feature, as per perlpolicy.

At any rate, we have now left the realm of the ticket's initial
report.
Backward compatibility is not the subject, as the newline-in-statted-
filename
warning dates to the big 5.0.


  Fine... show me where the ***behavior*** is documented.
  I.e. where is it said perl will issue a diagnostic if "Xthis
  " condition is true?

  I assert the behavior is undocumented -- the error is documented,
but that happens after the code has been designed, written and deployed.

  I would assert that hidden and undocumented behaviors whose only
existence can be gleaned by reading error and warning messages are a
bug. The warning messages document the irrelevant error message, but
where is it documented what data constructs will emit surprise
warnings in the field?

Whether or not the new experimental warnings would be implicit with
a "use 5.18.0" would be Ricardo's decision in consultation with the
community (which p5p isn't representative of -- it's a closed,
self-privileged design group which promotes the idea that its members
should be held special for being **able** to decide what goes into
perl -- and tries to emphasize its "volunteer status")...

p5p is not closed. It is an open-subscription mailing list with
public archives.


Your subscription and posting process is filtered. That
doesn't qualify for 'open' no matter how selectively you claim to
filter.

You are, to the best of my knowledge, the only person who has been
more or less
permanently removed from perl5-porters, at least in the last five
years when
I've been heavily involved.


  Interesting. I only started trying to use perl as a programming
language about 3-4 years ago (vs. a scripting language).

Warning me about my file names or data formats is not something
a mature language does. You don't see "C" or "C++" giving such
warnings. OR, if they do, they happen at compile time and refer
to a possible run-on string inside a quote.

This is a bizarre criticism. Perl is not C. The same restrictions
and design
priorities need not apply, and in some cases, ought not to apply.


  Yes, again, you point out the error in my thinking -- trying to use
perl as a programming language and not a scripting language. I have had
experience in compilers at some point in my history. I suppose it is
of little surprise that I've found the best language design
principles in the perlregex engine, with most of my clashes having been
where perl has special quirks enabled to create problems or where it
doesn't follow programming language design principles that I've been
trying to use it for.

The stat call has never had such a bug. Again-- a new 'feature' has
been implemented "checking for newline at the end of a file name" now
throws an error if the file doesn't exist.


  Maybe not, but show me the documentation that says perl will have such
a behavior with some data (and where it defines all the special cases
that are implemented for each command). Undocumented behavior is still
a bug. That some warning message is issued when some data is in a particular
state can be looked up AFTER the fact. But where is this behavior
documented?

@p5pRT

p5pRT commented Jan 28, 2014

From @demerphq

On 28 January 2014 19​:52, Linda Walsh via RT <perlbug-followup@​perl.org> wrote​:

On Mon Jan 27 07​:31​:11 2014, perl.p5p@​rjbs.manxome.org wrote​:

* Linda Walsh via RT <perlbug-followup@​perl.org> [2014-01-27T06​:24​:54]

Any language change which breaks backward-compatibility should be able to be
enabled or disabled lexically. **Unless code at a given scope declares that it
wants the new behavior, that new behavior should be disabled.**

I do not consider introducing new warnings to be breaking backward
compatibility in code that has specifically requested that all warnings be
turned on. Instead, it is forward compatibility​: your code does not need to
be altered to get new warnings.
----
New warnings are an incompatible code change --- they change the
way old, functioning programs work. As such, they should be
turned on only with "use 5.18" or use feature, as per perlpolicy.

No. This has been debated before and we decided to err on the side
of caution and not require this. Warnings are there to enable
programmers to catch things they weren't aware of. As far as I know we
do our best to only introduce new warnings at a major release, but we
do not require anybody to enable anything.

We might one day change this, but it won't be because of "use warnings 'FATAL'".

At any rate, we have now left the realm of the ticket's initial
report.
Backward compatibility is not the subject, as the newline-in-statted-
filename
warning dates to the big 5.0.
------

    Fine... show me where the ***behavior*** is documented.
    I.e. where is it said perl will issue a diagnostic if "Xthis
    " condition is true?

    I assert the behavior is undocumented -- the error is documented,
but that happens after the code has been designed, written and deployed.

We document the behavior that can trigger a warning in perldiag. Here
is the entry for the case you are concerned about​:

  Unsuccessful %s on filename containing newline
  (W newline) A file operation was attempted on a filename, and that
  operation failed, PROBABLY because the filename contained a
  newline, PROBABLY because you forgot to chomp() it off. See
  "chomp" in perlfunc.

If you have not reviewed the complete list of warnings in perldiag and
understood the explanation provided for each one, yet you have used
fatal warnings, then it is your own responsibility and not ours.

    I would assert that hidden and undocumented behaviors whose only
existence can be gleaned by reading error and warning messages are a
bug. The warning messages document the irrelevant error message, but
where is it documented what data constructs will emit surprise
warnings in the field?

In perldiag.

Whether or not the new experimental warnings would be implicit with
a "use 5.18.0" would be Ricardo's decision in consultation with the
community (which p5p isn't representative of -- it's a closed,
self-privileged design group which promotes the idea that its members
should be held special for being **able** to decide what goes into
perl -- and tries to emphasize its "volunteer status")...

p5p is not closed. It is an open-subscription mailing list with
public archives.
---
Your subscription and posting process is filtered. That
doesn't qualify for 'open' no matter how selectively you claim to
filter.

Sure it does. We filter spam. And because some people refuse to play
nice on the list we filter some people too.

The default however is to allow people to post to the list.

And anything posted to the list is public.

This is what "open" means.

You are, to the best of my knowledge, the only person who has been
more or less
permanently removed from perl5-porters, at least in the last five
years when
I've been heavily involved.
----
Interesting. I only started trying to use perl as a programming
language about 3-4 years ago (vs. a scripting language).

There is no such distinction. A program is a program, be it compiled,
or interpreted or something in between. And the language it is written
in is a programming language. There is no such thing formally as a
"scripting language". That is an informal distinction that is made by
some based on the use the language is put to. It has nothing to do
with the design of the language per se, although traditionally one
does not include strongly typed languages in the "scripting" bucket.
Nevertheless the distinction is entirely an informal one.

Warning me about my file names or data formats is not something
a mature language does. You don't see "C" or "C++" giving such
warnings. OR, if they do, they happen at compile time and refer
to a possible run-on string inside a quote.

This is a bizarre criticism. Perl is not C. The same restrictions
and design
priorities need not apply, and in some cases, ought not to apply.
---

    Yes, again, you point out the error in my thinking -- trying to use
perl as a programming language and not a scripting language.

As I said above there is no such distinction. There are many other
distinctions between C and Perl, but "programming language" and
"scripting language" is not one. Both are programming languages.

I have had
experience in compilers at some point in my history. I suppose it is
of little surprise that I've found the best language design
principles in the perlregex engine, with most of my clashes having been
where perl has special quirks enabled to create problems or where it
doesn't follow programming language design principles that I've been
trying to use it for.

So basically you are saying that you learned language X and then used
Perl and then got surprised when it worked differently.

Well *duh*, what did you think would happen? Do you think all C
compilers produce the same diagnostics? They don't. Often they don't
even support the same dialects of the language. Do you think that
Fortran and C compilers produce the same diagnostics? What about
Prolog? Or Scheme? Or Pascal?

Every language, computer and human, has its quirks. You cannot expect
them to behave the same way, and indeed it makes no sense whatsoever
to do so. If they were the same then they would be the same. :-)

The stat call has never had such a bug. Again-- a new 'feature' has
been implemented "checking for newline at the end of a file name" now
throws an error if the file doesn't exist.
----
Maybe not, but show me the documentation that says perl will have such
a behavior with some data (and where it defines all the special cases
that are implemented for each command). Undocumented behavior is still
a bug. That some warning message is issued when some data is in a particular
state can be looked up AFTER the fact. But where is this behavior
documented?

In perldiag. Every warning and error that Perl can throw is
documented there, with the causes for the warning.

If there is a real bug worth talking about in this thread, it is that we
do not properly spell out the impact of FATAL (or if we do I can't find
it).

I will look into doing a patch to make it absolutely clear that FATAL
warnings are not forward compatible.

Yves

--
perl -Mre=debug -e "/just|another|perl|hacker/"

@p5pRT

p5pRT commented Jan 28, 2014

From perl-diddler@tlinx.org

On Tue Jan 28 09​:07​:35 2014, demerphq wrote​:

On 28 January 2014 19​:52, Linda Walsh via RT <perlbug-
followup@​perl.org> wrote​:

New warnings are an incompatible code change --- they change the
way old, functioning programs work. As such, they should be
turned on only with "use 5.18" or use feature, as per perlpolicy.

No. This has been debated before and we decided to err on the side
of caution and not require this. Warnings are there to enable
programmers to catch things they weren't aware of. As far as I know we
do our best to only introduce new warnings at a major release, but we
do not require anybody to enable anything.


  Here's where your logic is flawed. When a distro is upgraded, it is
  done by end-users, not programmers. You don't want all the sysadmin
  scripts on all systems to start failing due to some new incompatible
  change.

  When you do want it is when the *author* (programmer) is *in* the code
  changing it. When they are working on it and enable the new features of
  a given level -- THEN you expose the new warnings -- because at that time
  the *author* is in the code and is editing/changing it -- now you have their
  attention and it would be appropriate to expose such errors.

  But whatever the new feature, enabling it on a new installation when it
  is installed by end-users who won't know what to do with it -- or why their
  system isn't functioning the way it should -- that is exactly the wrong time
  to display such messages.

  If you want the warnings directed to programmers, as you claim -- then turn on
  the warnings when you know it is the(a) programmer who is making the changes.

  That's not what you are doing though -- you aren't toggling it on when the
  author is around (they may be on vacation or whatever) -- you are turning it on
  because someone thought perl 5.18 would be backward compat with 5.16 and it
  should be safe to upgrade --- instead you are letting the user know something
  they don't understand.

  Realistically, you believe it best to turn on new warnings that can cause code
  to fail when unrelated people upgrade perl on some system due to a distro
  upgrade?

  Why wouldn't you do it when you know an author is making changes to use new
  features in the version?

  The 1st way causes maximum disruption to unrelated users and unsuspecting authors.
  The 2nd way targets the warnings at those who are working in the code and doing
  changes there.

  Why is causing potential harm to unrelated people considered preferable over
  targeting at the author?

@p5pRT

p5pRT commented Jan 28, 2014

From @demerphq

On 29 January 2014 01​:29, Linda Walsh via RT <perlbug-followup@​perl.org> wrote​:

On Tue Jan 28 09​:07​:35 2014, demerphq wrote​:

On 28 January 2014 19​:52, Linda Walsh via RT <perlbug-
followup@​perl.org> wrote​:

New warnings are an incompatible code change --- they change the
way old, functioning programs work. As such, they should be
turned on only with "use 5.18" or use feature, as per perlpolicy.

No. This has been debated before and we decided to err on the side
of caution and not require this. Warnings are there to enable
programmers to catch things they weren't aware of. As far as I know we
do our best to only introduce new warnings at a major release, but we
do not require anybody to enable anything.
----
Here's where your logic is flawed. When a distro is upgraded, it is
done by end-users, not programmers. You don't want all the sysadmin
scripts on all systems to start failing due to some new incompatible
change.

If sysadmins are running their scripts with warnings FATAL then
presumably that is *exactly* what they want.

And if you don't want to be affected by this then install a second perl
that is stable and that the sysadmins won't mess with.

The latter is actually the general recommendation for how to run
business logic on perl given the prevalence of perl in sysadmin
tooling.

    When you do want it is when the *author* (programmer) is *in* the code
    changing it. When they are working on it and enable the new features of
    a given level -- THEN you expose the new warnings -- because at that time
    the *author* is in the code and is editing/changing it -- now you have their
    attention and it would be appropriate to expose such errors.

    But whatever the new feature, enabling it on a new installation when it
    is installed by end-users who won't know what to do with it -- or why their
    system isn't functioning the way it should -- that is exactly the wrong time
    to display such messages.

    If you want the warnings directed to programmers, as you claim -- then turn on
    the warnings when you know it is the(a) programmer who is making the changes.

You seem like a clever person, albeit somewhat misguided at times. I
suggest you think about that sentence for a while, and then realize we
don't have sentient computers yet, so having Perl know who is making changes
to the code it runs is pretty much completely out of the question.

    That's not what you are doing though -- you aren't toggling it on when the
    author is around (they may be on vacation or whatever) -- you are turning it on
    because someone thought perl 5.18 would be backward compat with 5.16 and it
    should be safe to upgrade --- instead you are letting the user know something
    they don't understand.

    Realistically, you believe it best to turn on new warnings that can cause code
    to fail when unrelated people upgrade perl on some system due to a distro
    upgrade?

Yes I do. I/we do not recommend people write critical business logic
using the system perl. There is too much danger that a module upgrade
by the sysadmins, or a module upgrade by the programmer can break
systems.

Yves

--
perl -Mre=debug -e "/just|another|perl|hacker/"

@p5pRT

p5pRT commented Jan 28, 2014

From perl5-porters@perl.org

Yves Orton wrote​:

If there is a real bug worth talking about in this thread is that we
do not properly spell out the impact of FATAL (or if we do I cant find
it).

I find that statement surprising, considering you are the author of
commit 5e0ced9.

@p5pRT

p5pRT commented Jan 29, 2014

From @demerphq

On 29 January 2014 04​:25, Father Chrysostomos <sprout@​cpan.org> wrote​:

Yves Orton wrote​:

If there is a real bug worth talking about in this thread is that we
do not properly spell out the impact of FATAL (or if we do I cant find
it).

I find that statement surprising, considering you are the author of
commit 5e0ced9.

Bah. I thought I had done that too, but when I couldn't find it I
thought I had misremembered. I blame jet-lag and a typo; instead of
looking at blead's pod I checked my older system perl. :-(

Anyway, thanks.

Yves

--
perl -Mre=debug -e "/just|another|perl|hacker/"

@p5pRT

p5pRT commented Jan 29, 2014

From perl-diddler@tlinx.org

On Tue Jan 28 09​:38​:22 2014, demerphq wrote​:

followup@​perl.org> wrote​:

New warnings are an incompatible code change --- they change the
way old, functioning programs work. As such, they should be
turned on only with "use 5.18" or use feature, as per perlpolicy.

No. This has been debated before and we decided to err on the side
of caution and not require this. Warnings are there to enable
programmers to catch things they weren't aware of. As far as I know
we
do our best to only introduce new warnings at a major release, but
we
do not require anybody to enable anything.
----
Here's where your logic is flawed. When a distro is upgraded, it is
done by end-users, not programmers. You don't want all the sysadmin
scripts on all systems to start failing due to some new incompatible
change.

If sysadmin are running their scripts with warnings FATAL then
presumably that is *exactly* what they want.


  I wouldn't want my log-rotations, my file serving, my email ... and
several other things to fail when I upgrade my distro.

  I want to be able to have a "grace period" where I can run the new
perl and turn on 'use feature "new warnings"' or "use
5.newversion" in 1 program at a time. I can't always easily upgrade
1 program at a time myself -- especially perl -- without causing problems.

  Open suse, if you remember (which used to be the best, but has
become more irresponsible than even those who turn on new warnings!
;^/) locks in all the perl-containing RPMs in a distro to
1 specific revision of perl.

They also -- depending on the module -- hard-code the library paths for
that perl into modules.

So all perl rpms will minimally have deps on perl-5.18.2 -- where an
upgrade to 5.18.3 will be incompatible with all previously installed
packages. Instead of loading perl, python and ruby dynamically at run
time in [g]'vim', they link them at build time -- meaning if you
change any of those languages, your primary editor won't function.
I even patched their RPM to build w/delayed loading so that if any of
the langs wasn't installed or had problems, you could at least "edit"
and fix the problems. The patch was discarded. They want to lock
everything together. The binaries require specific versions of libc --
and their libc has Suse-only additions, so it's not easy to substitute
a glibc that you've built.

1.5-2 years ago with the advent of 12.1, I found I could no longer
build perl on my system. That state persisted 1.5 years until
I found the source of the problem in the gnu db drivers/source code.
If they are used (and they are in perl) and if your optimal write size
on your build or execution disk is not a power of 2 (a multiple isn't
sufficient), then the db library fails (it has clever code that
requires a power of 2 that it uses for the db record size). Anyone
with a raid with a width of 3 data disks (or 5, 6, 7, 9, 10, 11...)
would build a perl that wouldn't pass the tests and would have a perl
(if built elsewhere) that wouldn't work with any of the DB routines --
a fairly severe problem. Of course both suse and the perl folks
pointed at gnu, rather than ensuring the needs of their customers (or
community) came first. The bug was reported, but the gnu person
taking the bug didn't seem very interested in fixing the bug.

That being the case, I was rather at the mercy of whenever my distro
updated -- as they drop support for versions >~1.5 years back.

While I do a lot of my own automation with perl, not being able to
build even with their RPM's means I don't always have an easy "say so"
about when I upgrade perl. With many of their packages being version
locked together, upgrades become an all-or-nothing affair.

Understand that, unlike MOST users, I am able to *usually* rebuild
most of the tools I use, so at least I have a chance to develop a
workaround. Many users won't have that ability and will have to
upgrade when their distro does or risk being left behind.

And if you dont want to be affected by this then install a second perl
that is stable and that the sysadmins wont mess with.


  Considering I need the default perl for myself and root and some
other daemons to be the same -- it would make more sense if suse
moved their distro location into a private dir like windows reserves
the /windows dir for their own use.

  Too many programs have /bin/bash and/or /usr/bin/perl in their
expectations (more the former than the latter). To create perl's
that will cause scores of scripts to break adds to a user's upgrade
pain (I've used suse more for a server system, but they are
focusing more on laptops and "appliances", so it's getting harder to
keep a fit -- though other distros have their own schedules and quirks).

Anything that perl developers can do to make things easier on users
when they first start a new perl is a good thing. Defaulting to tons
of warnings on things that, in many cases, should have already been
permanent features, isn't the type of abuse I would choose to inflict
on a user base. But I've noticed that more developers are programming
and writing programs for themselves, and users can go *** themselves.
*ouch*. Many developers take the attitude that users should consider
themselves lucky no matter what gets dumped on them.

That isn't what computers were created for -- they were supposed to
adapt to the users -- not the other way around.

The latter is actually the general recommendation for how to run
business logic on perl given the prevalence of perl in sysadmin
tooling.
===
  I've not seen any systems implemented that way in practice -- partly
because in business situations, often those who update the computers
w/new versions operate on their own schedule. Most importantly,
though, it happens asynchronously (or orthogonally) to what authors
are doing at some given time. Authors often find out what is going
down and work to stall until changes can be made, but that's often
not a luxury they have as they may be required to spend most or all of
their time on other projects. It's not convenient for me to go back
and rewrite all my scripts right now -- it could take several months.

Fortunately, I figured out the build problem and worked around it
(dumped my main devel partition and rebuilt it with a 256k stripe-size --
or at least telling the OS that; it's really "3" 256k stripes,
being a RAID 50 -- but one can make do with slower read/write
performance in order to have a working system).

If you want the warnings directed to programmers, as you claim -- then
turn on the warnings when you know it is the (or a) programmer who is
making the changes.

You seem like a clever person, albeit somewhat misguided at times. I
suggest you think about that sentence for a while, and then realize we
don't have sentient computers yet, so having Perl know who is making
changes to code it runs is pretty much completely out of the question.


  ??? I think you missed my clever idea. Perl wouldn't change the
code. It would replace the previous version of itself with minimal
run-time change requirements so those who just "run" things and
install perl as part of a new distro, can still do that without being
hit with many warnings and/or errors.

  When authors/designers/programmers get into the code and want
to "use 5.18.x" in order to enable some new feature -- THAT is when
the warnings get enabled -- when someone is working on the code and
includes a "use 5.18.x" -- something those who are just users would
not do.

  It's not perl that makes the decision. By being "clever", you
make it such that obnoxious new warnings only crop up for those
who upgrade the code by adding "use 5.18.x".

  That's why you only add new features on an opt-in basis -- so
those who don't opt-in and can't program aren't affected by
perl changing on a system that has many perl scripts.

  Thinking of the user is of prime importance for me. I'm shocked
at the attitudes and policies I see implemented these days --
extremely selfish to the point of being user-hostile. One guy
claimed that the environment was a "Do-acracy" -- those who "do"
get to run things. That's fine until they break previous programs
and until it is pointed out to them that they are ***privileged***
to be working on the code (unlike some who think volunteering, say,
to be president should get them "gratitude from the masses"​: Nep).

  The president **gets** to serve. His satisfaction comes from
the fact that he has the power and privilege to change things --
the idea that people should have gratitude toward those who *TAKE*
power and lock others out (whether it's banning their attendance
or confining them to "free speech zones") is ego-centric to the
point that they are often deluded into believing their own story.
Surrounding themselves with like minds, they can even feel warm and
fuzzy about their actions. If they "act" all
humble and put on a humorous character, many will believe them
to be a great leader -- like many think of Reagan -- as he
implemented his "money-first" policies.

Realistically, you believe it best to turn on new warnings that can
cause code to fail when unrelated people upgrade perl on some system
due to a distro upgrade?

Yes I do. I/we do not recommend people write critical business logic
using the system perl. There is too much danger that a module upgrade
by the sysadmins, or a module upgrade by the programmer can break
systems.


  Sadly, Ricardo agrees with you -- pointing to the very different
priorities that are in place for perl vs. other languages. While Java
is a scripted language, it has found a place in business and
increasingly python is as well. That perl is shut out of that galls me
no end --
which has been one of the major reasons I've pushed for the changes
I have (like this instance of not turning on warnings during an
upgrade).

  I would have preferred perl be upgraded to being less quirky and
more stable, but have been fought on every issue I've raised -- with
others getting irritated with me because I am so incredulous that they
would want to maintain the status quo -- which will continue to
cause perl's mindshare to decline.

@p5pRT
Copy link
Author

p5pRT commented Jan 29, 2014

From @demerphq

On 29 January 2014 10​:53, Linda Walsh via RT <perlbug-followup@​perl.org> wrote​:


Thanks Linda. Your position is noted.

We have already documented the following in perllexwarn​:

+B<NOTE:> Users of FATAL warnings, especially those using C<FATAL => 'all'>
+should be fully aware that they are risking future portability of their
+programs by doing so. Perl makes absolutely no commitments to not
+introduce new warnings, or warnings categories in the future, and indeed
+we explicitly reserve the right to do so. Code that may not warn now may
+warn in a future release of Perl if the Perl5 development team deems it
+in the best interests of the community to do so. Should code using FATAL
+warnings break due to the introduction of a new warning we will NOT
+consider it an incompatible change. Users of FATAL warnings should take
+special caution during upgrades to check to see if their code triggers
+any new warnings and should pay particular attention to the fine print of
+the documentation of the features they use to ensure they do not exploit
+features that are documented as risky, deprecated, or unspecified, or where
+the documentation says "so don't do that", or anything with the same sense
+and spirit. Use of such features in combination with FATAL warnings is
+ENTIRELY AT THE USERS RISK.

We will update that note with a mention that all warnings and errors
and the circumstance under which they are triggered is documented in
perldiag.

And with that, we will close this ticket as "wont fix".

Thanks for your report.

Yves

--
perl -Mre=debug -e "/just|another|perl|hacker/"

@p5pRT
Copy link
Author

p5pRT commented Jan 29, 2014

From perl-diddler@tlinx.org

On Tue Jan 28 19​:03​:36 2014, demerphq wrote​:

Thanks Linda. Your position is noted.


  Doesn't anyone prune what they are responding to anymore?

We have already documented the following in perllexwarn​:


  perllexwarn *** feels like *** an appendix to me -- not
part of the "perl main text".

  My question to ricardo went un-answered.

  Where is this *behavior* of checking file names against
"some set" of rules documented?

  The fact that "open" will prune spaces in the 2 arg form is
documented under the "open" call.

  If this was supposed to be a feature, why isn't it documented
under the stat call the same way open documents its special
handling and under what cases it does so?

  Only documenting warnings and not the tests themselves makes
for undocumented features that will catch people by surprise.

  Usually "undocumented behavior" is considered a bug. That
an error message was added to the "errata" section doesn't mean
the feature is documented.

  I.e. if you want to keep this wart in perl, it should be documented
in 'stat' -- not just in an error+warning appendix.

  Are there other formats that also trigger warnings?

  What other commands have undocumented behaviors and I mean
documented where the command is documented -- not separately in an
error compendium?

  Or is documenting these quirks with each command they are implemented in
too much to ask?

+B<NOTE:> Users of FATAL warnings, especially those using C<FATAL => 'all'>
+should be fully aware that they are risking future portability of their
+programs by doing so.


  Actually, if you don't want warnings spewed out all over the place,
it is safer to disable warnings entirely in production code.

  If I see a warning out of code, I think something is broken.

  If perl will randomly cause warning messages to appear in new
versions, I think the best advice would be to shut off warnings
entirely as they are not predictable. Of course in some cases
extraordinary measures will need to be taken to keep them off, since
as I understand it, simply keeping warnings disabled used to be
sufficient, but that was changed as well in this release and
warnings for the new category are forced on?

  I.e. "no warnings" needs to be at the front of programs to protect
against unwanted program output corruption?

  FWIW, I keep warnings on and fatal, as I don't want the program to
continue if there is something like an 'undef' that I didn't catch, as
that usually is a bad thing. Vs. the current crop of new warnings in
5.18, which only have an impact if someone was used to warnings meaning
that something went wrong. It is clear that use of warnings has been
expanded as a way to send messages to users rather than warn of bad
comparisons (integer against string or undefs). That's still, IMO,
not a good use of warnings.

(sorry for the reply, but I didn't see the question about
behavior documentation or what other commands are affected and/or what
other checks are done)...

@p5pRT
Copy link
Author

p5pRT commented Jan 29, 2014

From @demerphq

On 29 January 2014 14​:34, Linda Walsh via RT <perlbug-followup@​perl.org> wrote​:

On Tue Jan 28 19​:03​:36 2014, demerphq wrote​:

Thanks Linda. Your position is noted.
----
Doesn't anyone prune what they are responding to anymore?

We have already documented the following in perllexwarn​:
---

    perllexwarn *** feels like *** an appendix to me -- not

part of the "perl main text".

perllexwarn is the *standard* documentation for warnings. I don't know
what you think part of the "perl main text" is, and frankly I don't
care. Everything in pod/ in the perl sources is part of our
documentation. How we choose to structure our documentation is our
business and you cannot discount some piece of documentation because
you do not like where it is located, or think it should be located
elsewhere.

    My question to ricardo went un-answered.

    Where is this *behavior* of checking file names against

"some set" of rules documented?

I answered this already. It is documented in perldiag.

  Unsuccessful %s on filename containing newline
  (W newline) A file operation was attempted on a filename,
  and that operation failed, PROBABLY because the filename
  contained a newline, PROBABLY because you forgot to chomp()
  it off. See "chomp" in perlfunc.

That specifies that there is a warning "Unsuccessful %s on filename
containing newline", and it specifies under what circumstances it will
fire. (I am not sure why it says PROBABLY like that).

If you don't like this format of documentation then you can post a
patch which we will then assess on its merits.

However you cannot keep ignoring that this IS DOCUMENTED, and asking
us over and over to justify ourselves to you.

We owe you nothing. Nothing at all.

We are volunteers working on something we love, and we do not have any
obligation to explain ourselves or justify ourselves to you.

Repeatedly demanding that we do fundamentally mistakes the nature of
the relationship between us.

You are a consumer of something we do for free.

We owe you nothing for being a consumer of our work.

If anything it is the contrary, by using our efforts for your own
purposes you owe us a level of respect and consideration in your
communications with us that you regularly do not show.

That is both disrespectful and plain old rude.

    The fact that "open" will prune spaces in the 2 arg form is

documented under the "open" call.

So what?

    If this was supposed to be a feature, why isn't it documented

under the stat call the same way open documents its special
handling and under what cases it does so?

Because this warning can fire in many circumstances, specifically any
place that a file operation involving a filename is performed.

That includes many different operations, and were we to document it in
every one our documentation would become completely unwieldy and
impossible to read. So we won't be doing that.

    Only documenting warnings and not the tests themselves makes

for undocumented features that will catch people by surprise.

You keep ignoring that we document the behaviour. That you don't like
where we document it is irrelevant; you cannot go on saying it is
undocumented and expect to be taken at all seriously.

    Usually "undocumented behavior" is considered a bug.  That

an error message was added to the "errata" section doesn't mean
the feature is documented.

I don't know what an "errata" section is. perldiag is the place we
document ALL warnings and errors AND THEIR CAUSE.

If you chose to use warnings FATAL without thoroughly reading perldiag
then there is nothing we can to do help you except point at the
document and say "read that".

    I.e. if you want to keep this wart in perl, it should be documented

in 'stat' -- not just in an error+warning appendix.

No, Sorry. As I said before, if we took that approach then our docs
would be covered in repeated verbiage. We have perldiag for a reason.

On the other hand if you wished to file a documentation patch we would
consider it on the merits.

    Are there other formats that also trigger warnings?

I don't know. I have never had the need to read perldiag and see. I
don't use fatal warnings very often, would almost NEVER use them in
production, and I would never complain if Perl did what I asked and
died because of a warning I did not expect.

    What other commands have undocumented behaviors and I mean

documented where the command is documented -- not separately in an
error compendium?

I'm not here to help you avoid reading the docs. I am here to improve
and maintain the internals.

Your rejection of our documentation is completely irrelevant and irrational.

Furthermore, as I have said before, WE OWE YOU NOTHING. Demanding that
we demonstrate that something is documented in a way that you approve of
is completely unacceptable. You are not our teachers, our parents nor
our employers, so we owe you no explanations at all.

    Or is documenting these quirks with each command they are implemented in

too much to ask?

Yes, it is too much to ask. It is an inefficient use of the most
precious resource we have, competent developer time. Which you are
currently wasting by baiting me into replying to your mails.

+B<NOTE:> Users of FATAL warnings, especially those using C<FATAL => 'all'>
+should be fully aware that they are risking future portability of their
+programs by doing so.
----

    Actually, if you don't want warnings spewed out all over the place,

it is safer to disable warnings entirely in production code.

If you think that is a good policy then please go ahead. I personally
think that is completely insane.

    If I see a warning out of code, I think something is broken.

Ok. Thanks for letting us know.

    If perl will randomly cause warning messages to appear in new

versions, I think the best advice would be to shut off warnings
entirely as they are not predictable.

You are free to take whatever actions you wish, that includes other
irresponsible things like not looking both ways when you cross a one
way street, or not reading the fine print of a contract. It's your
choice.

Of course in some cases
extraordinary measures will need to be taken to keep them off, since
as I understand it, simply keeping warnings disabled used to be
sufficient, but that was changed as well in this release and
warnings for the new category are forced on?

I am unfamiliar with what you refer to and have no interest in
corresponding on the subject.

    I.e. "no warnings" needs to be at the front of programs to protect

against unwanted program output corruption?

I am unaware of any requirement or corruption. If you can produce a
test case that shows such corruption then we would consider it on its
merits.

    FWIW, I keep warnings on and fatal, as I don't want the program to

continue if there is something like an 'undef' that I didn't catch, as
that usually is a bad thing. Vs. the current crop of new warnings in
5.18, which only have an impact if someone was used to warnings meaning
that something went wrong. It is clear that use of warnings has been
expanded as a way to send messages to users rather than warn of bad
comparisons (integer against string or undefs). That's still, IMO,
not a good use of warnings.

Your opinion is noted. Thanks for sharing.

(sorry for the reply, but I didn't see the question about
behavior documentation or what other commands are affected and/or what
other checks are done)...

See perldiag.

This is the last post I will make in this thread.

Yves

--
perl -Mre=debug -e "/just|another|perl|hacker/"

@p5pRT
Copy link
Author

p5pRT commented Jan 29, 2014

From @doughera88

On Tue, Jan 28, 2014 at 06​:53​:16PM -0800, "Linda Walsh via RT" wrote​:

1.5-2 years ago with the advent of 12.1, I found I could no longer
build perl on my system. That state persisted 1.5 years until
I found the source of the problem in the gnu db drivers/source code.

I recall quite a bit of back and forth where folks on this list provided
important clues and devoted considerable time and effort to helping you
diagnose this problem.

a fairly severe problem. Of course both suse and the perl folks
pointed at gnu, rather than ensuring the needs of their customers (or
community) came first.

It is a gdbm bug, so directing the bug report there was appropriate.
However, we both helped diagnose the problem and provided you with a
workaround. I outlined a possible perl patch in RT in [perl #119623].
If you wanted to contribute to the community, you could try making a
patch to GDBM_File.xs and submitting it.

--
  Andy Dougherty doughera@​lafayette.edu

@p5pRT
Copy link
Author

p5pRT commented Jan 29, 2014

From @epa

Linda Walsh via RT <perlbug-followup <at> perl.org> writes​:

If I see a warning out of code, I think something is broken.

I feel the same way, which is why I run my programs with warnings enabled.
If something is broken (or might be broken) then I would like to find out.

If perl will randomly cause warning messages to appear in new
versions, I think the best advice would be to shut off warnings
entirely as they are not predictable.

Well maybe... or perhaps better to arrange for warnings to be sent to the
programmer but not to interrupt normal operation. For example, on a web
site I maintain, warnings go to the web server log and there is a regular
job to examine the log and send the messages by electronic mail. I find
this more useful than either of the two extremes of killing the whole
program on the first warning, or ignoring warnings altogether.

I.e. "no warnings" needs to be at the front of programs to protect
against unwanted program output corruption?

The problem of mixing up program output with diagnostic messages is an old
one. The Unix and perl approach is to have the program's output go to the
standard output stream, while errors and warnings go to standard error.
Then the output will never be corrupted by warning messages. I grant that
the messages still appear on the terminal, which can confuse non-technical
users.

FWIW, I keep warnings on and fatal, as I don't want the program to
continue if there is something like an 'undef' that I didn't catch, as
that usually is a bad thing.

In that case I would suggest

  use warnings FATAL => 'uninitialized';

You may also want to use

  use warnings FATAL => 'syntax';

to make all syntax (compilation) warnings fatal​: this is the rough
equivalent of gcc -Werror.

I would not recommend making *all* warnings fatal; the distinction between
warnings and fatal errors exists for a reason. If you do choose to make
your program exit on any and all warnings, you must accept that its
behaviour may change as warnings are added to the language.

--
Ed Avis <eda@​waniasset.com>

@p5pRT

p5pRT commented Jan 30, 2014

From perl-diddler@tlinx.org

On Wed Jan 29 05​:22​:11 2014, doughera wrote​:

On Tue, Jan 28, 2014 at 06​:53​:16PM -0800, "Linda Walsh via RT" wrote​:

1.5-2 years ago with the advent of 12.1, I found I could no longer
build perl on my system. That state persisted 1.5 years until
I found the source of the problem in the gnu db drivers/source code.

I recall quite a bit of back and forth where folks on this list provided
important clues and devoted considerable time and effort to helping you
diagnose this problem.


  My memory isn't what it used to be, but I don't recall any help from
the perl list on that particular bug as I couldn't get the SuSE
RPM to build -- and SuSE supposedly could build it in a "clean-room"
environment. I also found other products that wouldn't build on
a development install of opensuse -- because they don't build or test
building their rpms on a full development install -- only one with
the minimal packages necessary for each product (at the end of which
the system image is cleaned for the next product build).

  While I mentioned I had problems building perl -- no advice from the
list allowed me to build it (suggestions to use perl-brew, suggestions
to try various simple options) -- nothing worked and everyone was
stumped. It wasn't until I went looking for any test-suite to run
on the 'db' libraries (independent of perl), and found that nothing
worked that I started looking into the gnu-db code and found the
record problem.

  I submitted a patch to the gnu folks and while the opensuse people
implemented some patch to protect their users, I don't know if it
would fully work, since specifying the DB record size is only an option
in the gnu interface -- not other interfaces that were "compat layers"
on top of the gnu interface. So a fix for all of them only seemed
possible by a fix in the gnu code. AFAIK, they've yet to release
a fix, and I know they've not said whether or not my patch was any
good (I had no test suite to perform any exhaustive testing,
so I wasn't willing to release it without strong caveats). It
DID solve the build issue on perl, but by the time I wanted to build
perl again, a gdbm update had been applied during some suse update, so
I gave up and reformatted my hard disk with a non-optimal record
size -- only yielding about a 20-25% drop in performance.

  I do remember advice on a coredump problem in #78728 where people pointed
to the gnu library or HTML​::Parser with the bug being rejected in 5.10.
As it was still a problem in 5.16, I got a new list of CPAN modules
that were not core. In rewriting parts of it I finally got rid
of the core-dump problem by removing a module that no one thought of
or suggested​:

use mro 'C3';

After that was removed, no more core dumps to this day.

But if people hadn't convinced me how unreliable CPAN was viewed
as being, I likely wouldn't have gotten to the point of removing that, so one
could say that making me suspicious of any modules not written
by myself had some positive effect.

A year after the gnu-red-herring, I found another problem in a dedup
program (#100514). While it was fixed, at that point I could no longer
build perl due to the database bug... a couple of years later, even
with that fixed -- on the same dedup program (which got shelved and
unshelved, like the html-related crawl program that got shelved and
unshelved for years) -- I ran into #119475, which I'm told will make it
into 5.20 as its earliest debut.

I think I've worked around that one by using sysopen/read and not
using perlio....


It is a gdbm bug, so directing the bug report there was appropriate.
However, we both helped diagnose the problem and provided you with a
workaround. I outlined a possible perl patch in RT in [perl #119623].


  I seem to remember that, but discarded it as unworkable, since both the
ndbm and (name escapes me) the other dbm interface don't allow
one to specify a default. If I remember right, and if I'm thinking of the
right suggestion -- I haven't reread it since that time (please kick me if
I am not remembering the suggestion accurately) -- I think you suggested
using 'SOME' default (other than 0) in the gdbm interface -- which
would work for that interface, but not for the others -- and since
they were (I believe they still are) generated on suse. Since it wouldn't
solve the "whole problem", I didn't want to contribute a partial fix --
especially since the real fix should be in the gnu-db library.

If you wanted to contribute to the community, you could try making a
patch to GDBM_File.xs and submitting it.


  If it would fix the whole problem, that wouldn't be a bad idea, but
since the default of '0' is hard coded in the other 'compat' interfaces,
it would only cause potentially a more confusing situation of some db
calls/tests working with others not, but only on "some systems" (those
using RAID5/RAID6 that would enable duplicating the problem).

  It's not that I didn't think it through. I DID submit a patch to the
gnu people that fixed the root problem on *my system*, but w/no test suite,
I had no idea how it would affect anyone else, so I didn't want to
push that solution around...

(only responding to your note at this time...)

@p5pRT

p5pRT commented Jan 31, 2014

From @karenetheridge

On Thu, Jan 30, 2014 at 08​:30​:29PM -0800, Linda Walsh via RT wrote​:

On Thu Jan 30 19​:38​:03 2014, doughera wrote​:

I think if you read [perl #119537] and [perl #119623], you'll find
that we tried very hard to help.
----
I wasn't relying on memory alone -- I searched for anything with @tlinx.org in the submitter...

The words you are missing here are "thank you".

@p5pRT

p5pRT commented Jan 31, 2014

From @ysth

Yves wrote​:

I answered this already. It is documented in perldiag.

Unsuccessful %s on filename containing newline
    (W newline) A file operation was attempted on a filename, and that
    operation failed, PROBABLY because the filename contained a newline,
    PROBABLY because you forgot to chomp() it off. See "chomp" in perlfunc.

That specifies that there is a warning "Unsuccessful %s on filename
containing newline", and it specifies under what circumstances it
will fire. (I am not sure why it says PROBABLY like that).

"the filename contained a newline", and the "operation failed,
PROBABLY because" of that, that is, that the filename was wrong and
the actual existing filename has no newline.

@p5pRT

p5pRT commented Jan 31, 2014

From perl-diddler@tlinx.org

On Fri Jan 31 00​:36​:41 2014, sthoenna@​gmail.com wrote​:

Yves wrote​:

I answered this already. It is documented in perldiag.

Unsuccessful %s on filename containing newline
    (W newline) A file operation was attempted on a filename, and that
    operation failed, PROBABLY because the filename contained a newline,
    PROBABLY because you forgot to chomp() it off. See "chomp" in perlfunc.

That specifies that there is a warning "Unsuccessful %s on filename
containing newline", and it specifies under what circumstances it
will fire. (I am not sure why it says PROBABLY like that).

"the filename contained a newline", and the "operation failed,
PROBABLY because" of that, that is, that the filename was wrong and
the actual existing filename has no newline.


  But that is not the case. The filename isn't wrong and has a newline.

It's only test-actions (including open/read on non-existent filename)
where you see this arcane message. If you test an existing file
with a "\n" in it or if you create one with "\n" in the name, you get no
warning. Here's a script that creates a file using an io_op array
containing a file handle, a format string, and the filename.
Then it changes the file handle in the io_op array to print it to
the backup location.

Notice one only gets warned on the test operation and then only if it
doesn't exist​:

====
#!/usr/bin/perl
use warnings; use strict; use P;

my $fname="name"."\n";
open(my $fh, "+>", $fname) or die "Error​: $! opening file";
my @​io_op=($fh, "this is filename \"%s\"", $fname);
P @​io_op;
my $stat=close $fh or die P "Error $! closing \"%s\"", $fname;

P "file created";
if ( -e $fname ) { P "file \"%s\" exists", $fname }
my $bak=$fname."bak";
if ( -e $bak ) {P "bak \"%s\" exists", $bak}
else {
  open(my $fn, "+>$bak") or die "Error​: $! creating backup of file";
  $io_op[0]=$fn;
  P @​io_op;
  my $stat=close $fn or die P "Error $! closing \"%s\"", $bak;
}


1st time output​:

/tmp/tt.pl
file created
file "name
" exists
Unsuccessful stat on filename containing newline at /tmp/tt.pl line 13.

2nd time output​:

/tmp/tt.pl
file created
file "name
" exists
bak "name
bak" exists

ll
total 8
-rw-rw-r-- 1 25 Jan 31 04​:52 name

-rw-rw-r-- 1 25 Jan 31 04​:51 name
bak


Note that the Camel book makes no mention of such behavior --
as it doesn't include an appendix on the error messages one might
get and in what situation -- I don't know of any perl book that does.

@p5pRT

p5pRT commented Jan 31, 2014

From @kentfredric

On Fri Jan 31 04​:57​:43 2014, LAWalsh wrote​:

Notice one only gets warned on the test operation and then only if it
doesn't exist​:

Yes, that's because the linefeed character is a legal character in filenames on many platforms.

Thus, if you stat a file with a trailing "\n", and it exists, then there is a clear indication that you meant to do that, and no warning is emitted.

However, if you stat a file with a "\n", and no such file exists, there are 2 possibilities​:

1. You meant to find a file with a trailing \n, and it does not exist.
2. You accidentally added a "\n" to the filename prior to stat()ing it, and you meant to stat it without the "\n".

A warning exists, because case #2 is the most probable.

But in the event you wanted case one, then thats a legitimate use for

{
  no warnings "newline";
  <codethatwarnshere>
}

In essence, when you get a warning, it is not a _certain_ indication of something that is wrong. It's just a 50/50 "you might want to pay attention".

And legitimate solutions include​:

- Solving the problem that caused the warning to be emitted.
- Disabling the warning upon observing that the warning is wrong for your scenario.

@p5pRT

p5pRT commented Jan 31, 2014

From perl-diddler@tlinx.org

On Fri Jan 31 05​:16​:05 2014, kentfredric wrote​:

Yes, thats because the linefeed character is a legal character in many
platforms and filename.

Thus, if you stat a file with a trailing "\n", and it exists, then
there is a clear indication that you meant to do that, and no warning
is emitted.

However, if you stat a file with a "\n", and no such file exists,
there are 2 possibilities​:

1. You meant to find a file with a trailing \n, and it does not exist.
2. You accidentally added a "\n" to the filename prior to stating, and
you meant to say it without the "\n".

A warning exists, because case #2 is the most probable.

- Solving the problem that caused the warning to be emitted.
- Disabling the warning upon observing that the warning is wrong for
your scenario.


I don't know if I would call it a problem or not. It was the name of a file passed to me by Mojolicious that was parsing HTML that had it in something like​:
<img href="xxxxxfile.gif
">

My code was checking to see if it was in a cache, and if not, then caching it -- so one way or the other, my code didn't care -- except that, because it was a long-running program that dumped status of what it was doing (it had dumped ~100K lines, and I'm not sure when it would have stopped), any warning spit out wouldn't be seen among the rest of the messages UNLESS it stopped the program. Like the 'undef' that got returned for content in a few cases: that wouldn't have stopped the program without dying on warnings. The undef was a result of it literally reading in 'nothing' for content, due to a link getting a 404 or pointing to a non-existent site.

But a warning for valid data as returned from the site... and the warning not being documented in with the affected calls -- only able to be found in the error text?

Other than developers of perl5, I would bet money that the number of people who read perldiag for each release from start to finish would fall under 10% if you count casual users of perl -- probably < 1%. Providing documentation for such behavior in a place where it won't be seen by most people is bad design.

Example ---- say I want people to do "X" -- like turn off a light in the garage. I can install a camera. I can tell folks or put up a sign that not turning off the light will entail a warning -- because it will be caught. Or I can not tell them and berate them after the fact. Can you tell which I think is the better option?

I opted for neither approach -- I installed it on a 5-press timer, where each press gives 10 minutes more of light; pressing it 6 times and holding it for 10 seconds will disable the timer. Problem solved -- I don't have to play 'heavy', they don't have to be 'wrong', and the light goes out all the time (no one has ever accidentally left it on)...
No one has ever complained about a false positive or a false negative.

For warnings -- if I don't give a traceback, I usually have no clue where they happened, and if I don't stop the execution, I often will have no idea that anything noteworthy happened.

It's a case of something crying wolf that wasn't warned of up front that I thought was bad (not to mention halting execution -- which it has to do or be missed).

Along those lines -- it would be more useful if perl used a system-log that could be reviewed periodically like many other system installed programs.

@p5pRT

p5pRT commented Jan 31, 2014

From @iabyn

On Fri, Jan 31, 2014 at 08​:29​:13AM -0800, Linda Walsh via RT wrote​:

Other than developers of perl5, I would bet money that the number of
people who read perldiag for each release from start to finish would
fall under 10% if you count casual users of perl -- probably < 1%.

Why on earth would you want to re-read perldiag for each new release? Any
new/changed warnings are documented in the perldelta for that release.

Providing documentation for such behavior in a place where it won't be
seen by most people is bad design.

So for example, the "Use of uninitialized value" warning. Where would you
suggest we document that? There are literally thousands of places in the
perl core that can trigger that warning. We would have to list that
warning within the documentation for just about every perl function and
operator. Similarly, as has already been pointed out to you, the
"Unsuccessful <op> on filename containing newline" warning is triggered
by many different file operations, such as -e, -f, stat(), etc. We would
have to describe the warning for each of these cases.

Along those lines -- it would be more useful if perl used a system-log
that could be reviewed periodically like many other system installed
programs.

If you want that, then use a logging module, and use $SIG{__WARN__}
to capture warnings and handle them yourself.

--
Art is anything that has a label (especially if the label is "untitled 1")

@p5pRT

p5pRT commented Jan 31, 2014

From @kentfredric

On Fri Jan 31 08​:29​:13 2014, LAWalsh wrote​:

I don't know if I would call it a problem or not. It was the name of
a file passed to me by Mojolicious that was parsing HTML that had it
in something like​:
<img href="xxxxxfile.gif
">

My code was checking to see if it was in a cache, and if not, then
caching it -- so one way or the other, my code didn't care -- except
that --- because it was a long running program that dump status of
what it was doing ... had dumped ~100K lines and not sure when it
would have stopped -- any warning spit out wouldn't be seen among the
rest of the messages UNLESS it stops the program -- like an 'undef',
that got returned for content in a few cases. Wouldn't have stopped
the program -- w/o dying on warnings. The undef was a result of it
literally reading in 'nothing' for content due a link getting a 404 or
pointing to a non-existent site.

For this specific case, this is an example of why warnings should _not_ be fatalised, but only fatalised in development.

And in this case, I'd be inclined to opt for the lexical `no warnings "newline"` approach.

In either case, the user uploads a file with invalid filenames encoded in XML; it's their responsibility to get it right.

Communicating the effect of this warning to the user however is up to you "somehow".

You can either

- Ignore this warning entirely and simply tell the user "that filename does not exist", and potentially confuse them because they don't know XML strings keep whitespace.
- Break with the standard and try to do the right thing, when a \n is there, by chomping it yourself
- Emulate the warnings feature by telling the user with some explicit code "uh, ... that looks weird"
- Find a way to communicate all warnings to user regardless of nature ( HIGHLY NOT RECOMMENDED )

I'd probably stick with #1.

You'll have users come to you with "Um, this file doesn't work", and you'll then have to work out why, and it will be in wherever the warnings get piped to.

But a warning for valid data as returned from the site... and the
warning not being documented in with the affected calls -- only able
to be found in the error text?

Other than developers of perl5, I would bet money that the number of
people who read perldiag for each release from start to finish would
fall under 10% if you count casual users of perl -- probably < 1%.
Providing documentation for such behaviour in a place where it won't be
seen by most people is bad design.

Just the thing is, in terms of the functionality it warns about, understanding the root cause is really something that ought to be documented, not in Perl,
but in your introduction to Operating Systems, File Systems, and Programming Languages in general.

If you specify a fixed length string to an IO system that takes fixed length strings and uses them verbatim for file mechanics, you will get exactly what you asked for. Even if it is not what you intend.

As such, the warning is not a chastisement, or an instruction to the user as to how they may or may not behave, but a heuristic, a marker that "hey, this is a bit weird, are you sure this is what you want?" If that is what you want, suppress or simply ignore the warning. (And it's only made worse for you by the fact that you made the warnings fatal, so you can't simply ignore them.)

A better analogy is not your lighting problem, but a smoke alarm. Smoke alarms are pesky things. They don't take any action on their own, they exist
merely to make you aware of environmental change that is considered anomalous to the sensor.

If you're near one, and consuming some smoked substance such as tobacco, or cooking something particularly smoky, you're likely to trigger such a sensor, even though there is no real threat present.

You have 2 such options when such a sensor alarm goes off.

You can observe and respond to the sensor's indication (i.e.: in the case of the fire breaking out, ... you want to stop that)

Or you can ignore/repress the sensor's indication (some sensors have a "temporarily disable" feature, or you can rip the battery out, or you can get really heavy-duty earmuffs, or you can just sleep through them screaming if you're really lucky)

Fatalised warnings however, is hooking up your smoke alarms to a fire suppression system. In some cases it calls for it, usually when you're testing that the fire suppression system in fact works... but not in general cases. Would be bad to have an entire office destroyed by mere water damage because somebody didn't realise they couldn't have a smoke within 10m of some doorway.

For warnings -- if I don't give a traceback, I usually have no clue
where they happened, and if I don't stop the execution, I often will
have no idea that anything noteworthy happened.

There are alternatives here. Warnings don't normally give much traceback, because a very large amount of Perl is CLI-oriented, so a 500-line traceback in the middle of execution for something that can be ignored is pretty bad.

However, if this is what you want, there are ways of making that happen​:

https://metacpan.org/pod/Devel::Confess

^ Now, each and every warning has a complete backtrace. Each and every exception does too, even if it was caught!

It's a case of something crying wolf that wasn't warned of up front
that I thought was bad (not to mention halting execution -- which it
has to do or be missed).

This at very most means an improvement to the documentation of perldoc -f stat and any other applicable functions that /may/ invoke that warning.

But even there, nobody would likely see it.

Mostly, because people are not looking for things until they need to know, and you never expect you needed to know about this case. So no obvious entry point would have prepared you.

Most people only respond to warning messages when they see them unexpectedly, and they encounter the warning because they were not able to anticipate the scenario that occurred to make the warning happen.

Which is why people use FATAL warnings when _testing_, so that they may find any warning sources and eliminate them where possible, and then use non-fatal warnings in production, in conjunction with reading messages sent to STDERR

Along those lines -- it would be more useful if perl used a system-log
that could be reviewed periodically like many other system installed
programs.

There are CPAN modules that provide such a feature, and some propagate warnings / errors to the various logging systems, but the complexity of that is a little too much to be a language-level construct.

Even in heavyweight languages like Java, printing to STDERR is still rather common for CLI things, and proper logging tech like log4j is not employed by everyone. (Heck, some of the time you're lucky to get it in STDERR, some people log to STDOUT :/ )

For web-services, many web toolkits have facilities to flow errors/warnings into server log files out of the box.

But if you're writing something yourself, or just enhancing existing code, a quick scan reveals things like https://metacpan.org/pod/Log::WarnDie , which proxies native warn/die handlers to a sophisticated logging tool of some kind.

And if you're doing anything web dev using the Plack stack, there's this middleware : https://metacpan.org/pod/Plack::Middleware::LogWarn

And if you just want warn/die to flow into Syslog, there's this https://metacpan.org/pod/Carp::Syslog

So you can do exactly as you request right now. Just not with out-of-the-box components without some glue.

The closest you'll get to what you want out of the box without CPAN deps is either

a) funneling the warning output on STDERR to a file with shell redirection
b) stealing the snippet of code here https://metacpan.org/pod/Carp::Syslog#DESCRIPTION and either using that as-is, or doing something similar.

@p5pRT

p5pRT commented Jan 31, 2014

From @Abigail

On Fri, Jan 31, 2014 at 05​:16​:05AM -0800, Kent Fredric via RT wrote​:

Yes, that's because the linefeed character is a legal character in
filenames on many platforms.

Thus, if you stat a file with a trailing "\n", and it exists, then there
is a clear indication that you meant to do that, and no warning is emitted.

However, if you stat a file with a "\n", and no such file exists, there
are 2 possibilities​:

1. You meant to find a file with a trailing \n, and it does not exist.
2. You accidentally added a "\n" to the filename prior to stat()ing it,
and you meant to stat it without the "\n".

A warning exists, because case #2 is the most probable.

Perl could increase the chance that the warning means 2. instead of 1. If a file
doesn't exist with the given name, and the name contains newlines,
first remove the newlines and see whether a file exists with that name,
and only warn if that is the case. I won't claim it's foolproof, but
I would think it reduces the chance of incorrectly issuing the warning.

Abigail

@p5pRT

p5pRT commented Jan 31, 2014

From perl-diddler@tlinx.org

On Fri Jan 31 08​:58​:47 2014, davem wrote​:

On Fri, Jan 31, 2014 at 08​:29​:13AM -0800, Linda Walsh via RT wrote​:

Other than developers of perl5, I would be money that the number of
people who read perldiag for each release from start to finish would
fall under 10% if you count casual users of perl -- probably < 1%.

Why on earth would you want to re-read perldiag for each new release?
Any new/changed warnings are documented in the perldelta for that
release.


  There have been changes that were documented in perldeltas for minor
releases and (I think -- not as sure) in the 'odd' devel releases.
Especially in the first, and I think in the 2nd, the announcements
didn't make it into the next major revision's release notes.

Providing documentation for such behavior in a place where it won't
be seen by most people is bad design.

So for example, the "Use of uninitialized value" warning. Where would
you suggest we document that? There are literally thousands of places
in the perl core that can trigger that warning. We would have to list
that warning within the documentation for just about every perl
function and operator.


  Perlop or perlvar. Though in that situation, almost any use of an
uninitialized var will trigger a warning. The big difference being --
Ricardo says this warning has been in perl since 5.8. Yet this is the
first I've encountered it or heard of it. It's a rare enough situation
that, in order for a developer to take action in advance of "in use"
software hitting the error, it needs to be documented in the 2 calls it
occurs in: stat and open of non-existent files (and by stat, that
includes the "-e" file ops -- which is where I hit it -- and lstat).
 
  Despite the claims of the end of the world in perl-documentation,
adding warnings for something the average devel won't encounter in
normal testing to the commands up front, like the warning about space
stripping at the front of a filename in open, would seem like a minimal
action to take. The idea is to inform people where they are likely to
see it, before it happens -- not where they might look after the fact.
I'm sure you've heard that an ounce of prevention is worth a pound of cure
-- this wouldn't have even come up as an issue had it received any
"press" outside of a diag-msg lookup table.
 
 
  The open call says​:

  The filename passed to the one- and two-argument forms of open()
  will have leading and trailing whitespace deleted and normal
  redirection characters honored.

Yet ... lets see...


perl -we'use strict; open(my $fh, "< mytest
") or die "error​: $!";'
Unsuccessful open on filename containing newline at -e line 1.
error​: No such file or directory at -e line 1.


  What do ya know... This IS a bug.

  FWIW, it doesn't give the warning in the case of "+<" nor ">".

But regardless of the op, the white space is stripped. But in the case
of "< mytest", it gives a useless warning about it, then strips it --
which is contrary to the docs -- as they just say the white space will
be stripped.

  Conversely, in the 3 arg format where whitespace is not stripped​:

  perl -we'use strict; open(my $fh, "<", "mytest
") or die "error​: $!";'
error​: No such file or directory at -e line 1.

  There is no warning. (and the white space really isn't stripped
-- not the \n nor the 2 spaces after "mytest").

  That's doubly broken. It gives a warning where it is
documented to strip off the newline, yet it doesn't give a
warning where it is documented to keep the new line.

Similarly, as has already been pointed out to you, the
"Unsuccessful <op> on filename containing newline" warning is triggered
by many different file operations, such as -e, -f, stat(), etc. We would
have to describe the warning for each of these cases.


  As was wrongly pointed out, I might add -- it's only triggered in
two places, and one of them is broken. The 1 non-broken place is
in stat usage and the related -X file calls. They are all grouped together, so mentioning it there (or a note to see the note on filename checking under
'stat') and under the stat call would seem logical -- certainly it
wouldn't interrupt the documentation flow any more than the comment
about stripping white space w/open.

  But the warning in 'open' is broken.

  My druthers: since open is broken in 2 ways -- the test would have
to be removed from the 2-arg form and added to the 3-arg form. This
would have the possibility of breaking existing code in the field.

  Given that, and there being only 1 other place it is used: it would
seem best to get rid of the quirk. It's done wrong with open, and is
technically wrong for stat on many platforms.

  Given that no one has missed it on the 3-arg open (though it is
"documented" to happen, as some might claim), and it is happening on the
2-arg form where it is documented that perl will remove it (it does --
but not before warning you about it! ?!?! What's the point in that?), it
seems to not be very useful and only happens with stat, where it is
guessing about user-data (vs. program correctness -- i.e. warning of
using vars that have not been initialized (undef) is something even 'C'
does).

  Should this bug be reopened, or should 2 new bugs be opened? -- i.e.
one for the open call, and a 2nd for documenting this behavior in the
1 case it is checked for (stat and its related -X functions) -- or
maybe it's better just to remove such cruft?


p5pRT commented Jan 31, 2014

From @rjbs

* Linda Walsh via RT <perlbug-followup@​perl.org> [2014-01-31T16​:45​:07]

The big difference being -- Ricardo says this warning has been in perl since
5.8.

No, not 5.8 (2002), but 5.000 (1994).

--
rjbs


p5pRT commented Jan 31, 2014

From @b2gills

I wanted to discuss the points brought up in this post as
it seems to be the crux of the issue.

On Sun, Jan 26, 2014 at 9​:36 PM, Linda Walsh via RT
<perlbug-followup@​perl.org> wrote​:

On Sun Jan 26 13​:47​:19 2014, perl@​froods.org wrote​:

Developers must understand the ramifications of 'use warnings FATAL =>
"all"'
or by using the -w option.
-----

Indeed. One must look at reasons why they do things as well.

My background is kernel and security programming. It is common practice to ensure the kernel compiles with no warnings.

This has roots in the security "best practices" as recommended by CERT @​
https://www.securecoding.cert.org/confluence/display/seccode/Top+10+Secure+Coding+Practices

I refer specifically to this section​:

***Heed compiler warnings.***

*** Compile code using the ___highest warning level available for your
*** compiler and eliminate warnings by modifying the code
*** [C MSC00-A, C++ MSC00-A]

---

The problem with this is that C errors are similar to Perl syntax errors,
and C warnings are more similar to Perl's `use strict` with a few warnings
categories added. (as in not all warnings)
The rest of the Perl warnings are more similar to asserts inside of
the C standard library.

In an update to these notes, @​
https://www.securecoding.cert.org/confluence/display/seccode/MSC00-C.+Compile+cleanly+at+high+warning+levels , it says​:

* MSC00-C. Compile cleanly at high warning levels
- Added by Robert C. Seacord, last edited by Carol J. Lallier on Oct 25, 2013
Compile code using the highest warning level available for your
compiler and eliminate warnings by modifying the code.

....

MSC00-EX1​: Compilers can produce diagnostic messages for correct code, ...
[however]...

*** Do not simply quiet warnings***

...Instead, understand the reason for the warning and consider
a better approach, such as using matching types and avoiding
type casts whenever possible.

The reason they don't recommend disabling any warning in C is that you
can't disable a warning in only the location where you know it is fine.
In Perl you can, and sometimes it is necessary.
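For instance (a sketch; the hash and key names are invented), a warnings
category can be switched off for exactly one block while staying enabled
everywhere else in the file:

```perl
use strict;
use warnings;

my %row = ( name => 'widget' );   # 'price' deliberately absent

my $label;
{
    # 'uninitialized' is disabled only inside this block; the rest
    # of the file still warns about undef use.
    no warnings 'uninitialized';
    $label = "$row{name}: $row{price}";   # undef interpolates as ""
}
print "$label\n";   # prints "widget: "
```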

Actually, reading that document, C doesn't even have a way to make sure
that every piece of input is validated, whereas Perl does.
( secure coding practice number one. )

Most Perl warnings are more akin to static and dynamic analysis tools,
so Perl is actually better in the second secure coding practice as well.

Perl also helps in security practice number 7 by warning you when you
call stat with potentially unsanitized data.
( this is exactly the warning that we are discussing )

Perl can't help with number 8, as it says to use
"multiple defensive strategies", using Perl's built-in security
features should only count as one strategy.

It would be difficult for Perl to help with the rest of the top 10 secure
coding practices. ( If we could, we would )

----

As a result of security "best practice advice" for the past 10-15 years, it
has been my habit to always enable warnings and to treat them as fatal errors.

Not doing so would go against "best practices" for secure software.

That's best practices for secure C software.

The best practice for secure Perl software is to only fatalize some warnings
so that your software continues to work, to help you find the exact problem
that needs fixing. As the root of the problem may be somewhere other than
where the warnings come from. Which might not be evident until you have
many instances of it.
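Concretely (a sketch): one category can be promoted to fatal while every
other warning stays a plain, non-fatal warning:

```perl
use strict;
use warnings;                            # all categories warn...
use warnings FATAL => 'uninitialized';   # ...but this one now dies

my $undef;
my $ok = eval { my $n = $undef + 1; 1 };   # fatal warning caught by eval
print $ok ? "survived\n" : "fatalized: $@";
```

The eval confines the fatality, so the surrounding program can log the
failure and carry on.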

That many in the perl community disagree with enabling warnings and treating
them as fatal errors, demonstrates a lack of experience and knowledge in
security practice. While a lack of knowledge is not a big deal (it is
fixable).

    Willful adherence to ignorance and bad practice is a 1st order danger to the success of a project or product.

The success of Perl software is sometimes how well it responds to
possible errors,
and that could require that it continue to run after an error.
( a bug riddled server makes more money than one that isn't running at all )

For example I have a piece of software that parses free-form html
files on the internet.
Right now it throws some uninitialized warnings. It didn't when I
wrote it, but the files
changed in the mean time. The easiest way I can debug it is by looking
at the files it
generates, so that means it has to finish completely.

That warning in this case has NOTHING to do with security as it is caused by
not ignoring the undef earlier.
The only reason I would look into this warning is that I could be
dropping some of
the information I am trying to collect. Which means the actual problem
(if there is one) is likely earlier in the program.

    Best practices specifically advise to compile with the highest level of warnings turned on, treat warnings as ERRORS, and only ship production code that runs without errors or warnings.

The only C-like warnings that Perl can detect at compile time are the ones
that are enabled with `use strict` which does treat the C-like warnings as
errors.

What they meant with "treat warnings as errors" is that you should fix
the error BEFORE
ever running the code.
We can't do that with what we call warnings as they don't happen until the code
is running.

Actually sometimes the warnings only happen in production, so how would you fix
such a problem before it goes into production?

    Comments from perl leadership, established developers et al, indicate they have no problem suddenly turning on new warnings that can cause established code in production to fail.

The two big categories of warnings we have added recently
are experimental and deprecated.
The deprecated category warns programmers that they HAVE
to change their code for it to continue to work in the future.
The experimental category warns them that they MAY
have to change their code in the future.

We currently have no other way to programmatically tell programmers
that their code may break in the future. As many/most programmers
don't read and understand every line of every perldelta we have
to do so programmatically. Even if they did read perldelta, it is easier
for Perl to find the problematic code.

That is we are "warning" them about a future problem, so we add it to the
list of warnings. (Where else would we put it)

    The effect of such a policy, in a language that can generate new warnings on any major update, is to dissuade best security practices.  Those responsible for such policies are ultimately responsible for a general lowering of good security practice in those who use such code.

Again you're mistaking static language security practices with Perl
security practices. I would very much doubt that the people who wrote
that have ever written a non-trivial Perl program.

    To advise against treating "warnings-as-errors", and to design and develop software that encourages bad security practice, is contributing to a lowering of security practice in the users of such.  Indeed, as evidence of this, one need only look at how many CPAN projects build and install w/no warnings.

Actually, since adding `use v5.12` to your code enables strict mode,
we are encouraging good security practices.

As a core language developer wrote​:
"Indeed. One of the ramifications of "/warnings FATAL/"
is that programs which do not die on an older Perl may
die on a later Perl release due to new warnings being added. "

In other words, if one tries to follow best security practices - just to the
extent of using pragmas that make all warnings fatal - then due to the current
design and maintenance team's practices, programs & products in the field
may die w/o notice (or deprecation cycle), w/a new release of perl.

One of the warnings categories is `deprecated` so by fatalizing every warning
you prevent us from warning you before it becomes an actual problem.

Incompatible programming changes were supposed to be preceded by
announcements and a deprecation cycle -- with new "features (like new
warnings)" being activated only on an opt-in basis. However, the current
maintenance team has shown they are unable to even follow this practice,
enabling new "warning features" w/o notice, and even activating warnings in
"valid code", processing "valid data" in a misguided example of paternalism.

Incompatible changes are preceded with announcements and a deprecation
cycle. You are capable of disabling the warnings, and your code will
continue to work as it always had. So it is therefore still compatible with your
code.

Again Perl warnings are indicators of POSSIBLE problems, whether
they are actually problems in your code is another matter.

The current maintenance team is obviously free to continue along their path
and likely will continue to ignore any input contrary to what they want to
hear, including citations of software & security "best practices". They may
continue to implement policies that punish good software practice, though
this will continue to have consequences for the future of this product.

We do NOT ignore ANY input, we just sometimes decide to do something other
than what you think is the best course of action.
( Just because you think it is the best course of action doesn't mean it is )
If / when you have found something better that doesn't break a lot of code;
we may very well act on it.

Actually if we ignored "input contrary to what we wanted to hear"; there
would be ZERO responses to any of your posts.
That there are numerous responses; proves otherwise.

I would point out that a more egregious example of turning off warnings and
errors has to do with blanket filtering out (or turning off) people who
generate warnings and errors, who have some minimal experience in such areas.
I see this as a worse problem -- in that it is a 2nd-order level of willful
ignorance. Not only is there a refusal to learn, but it extends to
disallowing even potential sources of alternate opinions.

You don't have to turn off warnings, you just have to not fatalize them all.

This makes for a more toxic environment -- affecting not just a current
revision of a product, but its entire future. It becomes like compound
interest in how its effects grow over time. Eventually, the side effects of
such policies become too large to ignore. Creators of such projects have even
been known to try to "restart/reset" the project with new and incompatible
designs. Sometimes these have succeeded and sometimes not. What is clear is
that bad software practice often generates more of the same.

We have made missteps in the past, and most assuredly will continue to
make more.
Security is one of the places where we have made fewer errors.
We also strive very hard to maintain compatibility, which is why we only enable
a few warnings by default.

I hope I have been sufficiently clear such that you know how "aware" I am of
the "ramifications" of using warnings->FATAL and as well as the ramifications
of NOT doing so.

The ramifications of not fatalizing absolutely every warning; are far
less severe
in most cases than you think they are.

I'm sure that almost every expert level Perl programmer would agree with me
on that point.

And actually Perl warnings are fundamentally different from C warnings,
so they have to be handled differently. Even fatalizing all of the warnings
is still handling them differently, as they only show up upon running
the software.


p5pRT commented Jan 31, 2014

From @ap

* Linda Walsh via RT <perlbug-followup@​perl.org> [2014-01-31 22​:50]​:

Ricardo says this warning has been in perl since 5.8.

“Ricardo says”?

Here’s the 5.000 commit in perl’s repository​:
http​://perl5.git.perl.org/perl.git/blob/a0d0e21ea6ea90a22318550944fe6cb09ae10cda

Here’s what you’d have found in line 1848 of perldiag if you opened it in 1994​:
http​://perl5.git.perl.org/perl.git/blob/a0d0e21ea6ea90a22318550944fe6cb09ae10cda​:/pod/perldiag.pod#l1847

(I went with the repository because I couldn’t find a full series of
release tarballs going all the way back. <http​://www.cpan.org/src/5.0/>
only goes back to 5.003, and <http​://backpan.perl.org/src/5.0/> has one
5.002 tarball, but the rest of the early series seems missing. Though if you
download and look inside it, you’ll find the same warning is documented
there too.)

It took me 5 minutes of looking in the most immediately obvious places
to find this.

It doesn’t matter what Ricardo says, or that it’s Ricardo who is saying
it. You have the power to examine reality and draw your own conclusions.
There is no need to make yourself dependent on the thinking of others.

Regards,
--
Aristotle Pagaltzis // <http​://plasmasturm.org/>


p5pRT commented Jan 31, 2014

From @craigberry

On Fri, Jan 31, 2014 at 5​:02 PM, Brad Gilbert <b2gills@​gmail.com> wrote​:

The reason they don't recommend disabling any warning in C is that you
can't disable a warning in only the location where you know it is fine.

Not disagreeing with the overall thrust of your argument but there are
#pragma directives in most compilers that can do exactly that.
Unfortunately the details are different for every compiler.


p5pRT commented Feb 1, 2014

From perl-diddler@tlinx.org

On Fri Jan 31 15​:53​:07 2014, aristotle wrote​:

* Linda Walsh via RT <perlbug-followup@​perl.org> [2014-01-31 22​:50]​:

Ricardo says this warning has been in perl since 5.8.

“Ricardo says”?


The point was not who said it, or if it was in 5.8 or 5.0 -- the point was that it's been buggy since it was implemented.

The check in open is wrong.

The check in stat is 'questionable' -- since it is not documented as "behavior" -- something that can guide you as to what will happen if you do "X", but as a "diagnostic" message -- something that tells you something **after** you have encountered the problem.

Behavior *predicts*. Diagnostics explain.

The difference in meaning is why I said and say this is undocumented *behavior*.

Example. I can read about alcohol and its effect on behavior and know that it can cause problems. A breathalyzer, blood or urine test are examples of diagnostics.

The way to figure out how someone is going to behave, if I know they might be drinking at a party, is to read about the effects of alcohol on behavior. No one in their right mind would start with a breathalyzer test before someone started showing problematic symptoms.

Reading diagnostics doesn't help avoid the issue in the first place. Ask doctors how well reading a clinical diagnostic manual works at preventing health problems. You don't read diagnostics to prevent problems; you look for documentation on implemented behavior.

In any event...
No matter how you characterize the existing written material, it is clear
that it is inaccurate. As such, it is questionable whether the feature is
worth keeping in the code, even for the 1 case (stat & co.) where it
produces diagnostic messages for the problem it was trying to diagnose.

p5pRT commented Feb 1, 2014

From @ikegami

On Fri, Jan 31, 2014 at 8​:04 PM, Linda Walsh via RT <
perlbug-followup@​perl.org> wrote​:

Example. I can read about alcohol and it's effect on behavior and know
that it can cause problems.

perldoc perldiag

A breathalyzer, blood or urine test are examples of diagnostics.

The tests are in open, stat, etc.

Using `use warnings` is a decision to use those tests.

The string emitted to STDERR is the test results.


p5pRT commented Feb 1, 2014

From perl5-porters@perl.org

Kent Fredric wrote​:

If you're near one, and consuming some smoked substance such as tobacco,
or cooking something particularly smoky,

Sounds like my cooking!

you're likely to trigger such a
sensor, even though there is no real threat present.

Not even to one's lungs?


p5pRT commented Feb 1, 2014

From perl-diddler@tlinx.org

On Fri Jan 31 15​:03​:03 2014, brad wrote​:

I wanted to discuss the points brought up in this post as it seems
to be the crux of the issue.

On Sun Jan 26 13​:47​:19 2014, perl@​froods.org wrote​:

Developers must understand the ramifications of 'use warnings
FATAL => "all"' or by using the -w option.

On Sun, Jan 26, 2014 at 9​:36 PM, Linda Walsh via RT wrote​:

Indeed. One must look at reasons why they do things as well.

My background is kernel and security programming. It is common
practice to ensure the kernel compiles with no warnings.

This has roots in the security "best practices" as recommended by
CERT
@​ https://www.securecoding.cert.org/confluence/display/seccode/Top+10+Secure+Coding+Practices

I refer specifically to this section​:

***Heed compiler warnings.***
*** Compile code using the ___highest warning level available for
*** your compiler and eliminate warnings by modifying the code
*** [C MSC00-A, C++ MSC00-A]
---

The problem with this is that C errors are similar to Perl syntax
errors, and C warnings are more similar to Perl's `use strict` with
a few warnings categories added. (as in not all warnings) The rest
of the Perl warnings are more similar to asserts inside of the
C standard library.
----
  The most common warning, at least with static code, is one
that is often caught at compile time -- use of vars before
initialization.

  Perl is lucky in that it has a canary 'undef' value that can flag
such problems. 'C' has no such value, though a pointer to low mem
can be detected, as can double frees in a malloc lib.

The reason they don't recommend disabling any warning in C is that you
can't disable a warning in only the location where you know it is
fine.
In Perl you can, and sometimes it is necessary.


  I remember pragmas from 20+ years back to do just that. They are
usually compiler specific.

Perl also helps in security practice number 7 by warning you when you
call stat with potentially unsanitized data.
( this is exactly the warning that we are discussing )


  Sort of "unsanitary": the code was trying to replicate an existing
web page. It was verified insofar as it produces a valid web
page in most browsers and is successfully parsed by Mojo.

  The input in question is NOT *INVALID* -- from the standpoint that
it works in browsers and Mojo-parsing. Can you imagine browsers that
popped up warning "popups" on any invalid HTML?

That's best practices for secure C software.

The best practice for secure Perl software is to only fatalize
some warnings so that your software continues to work, to help you
find the exact problem that needs fixing. As the root of the problem
may be somewhere other than where the warnings come from. Which
might not be evident until you have many instances of it.


  The warnings keep changing -- code that works today w/o warnings may
die tomorrow -- since the question becomes: what do you do with
warnings you haven't classified -- do you fail "open" or fail "closed"?

  If you are the CIA or NSA, you want to fail "closed".

  I am not thinking of security as much in terms of violations of
sensitivity but more in terms of violations of integrity.

  If I am writing programs that need to run as root because they are
creating and destroying file systems, *daily*, the consequences of
something disintegrous running amok, can be disastrous. Most of my
personal concerns on many of my perl programs have to do with a concern
of what might happen if I don't fail in "unknown situations" -- like
unexpected warnings.

The success of Perl software is sometimes how well it responds to
possible errors, and that could require that it continue to run
after an error. ( a bug riddled server makes more money than one
that isn't running at all )


  And one that crashes losing all user data doesn't make much at all.
The only reason that some servers continue to make money is the owners
of the servers are not held responsible for the monetary costs of
the security failure.

  Wasn't it Target that just recently had a score million credit
cards released to thieves? The costs of that aren't borne by
Target, but by the credit card companies and card holders.

  Discover used to handle all the time costs of switching a lost
or stolen CC#. They also offered more options for customers to keep
their CC# private (like one-time-use numbers and such). They
stopped offering assists w/contacting recurring payment merchants.

  When the onus for contacting merchants (many online) reverted
to the CC-holder, Discover also stopped offering the extra
privacy features for online transactions -- no financial benefit
to them. The costs to contact all merchants might only be an hour
or less, but multiplied by customer base -- not inconsequential.

If merchants w/insufficient security had to pay out of pocket for all
the tangential costs of a break-in, they might change their attitude
about allowing software that fails "open" and servers that "limp by".

  §§

  With my home server, I've had an opposite stance on the issue of
just allowing it to boot in the face of boot-probs, preferring it come
up, if limping, so that I can login and diagnose the problem vs.
having to create a new disk image via backups.

  Generally, I have tended toward the side of caution, but foolish
consistency is the hobgoblin of small minds. ;-) But I am wary of
ignoring problems in my perl scripts that run my systems. Having them
fail could cause partitions and/or file-systems to be "cleaned" and/or
deleted if something went disastrously wrong.

  A simple dedup program "ate" a terabyte or two of data -- that
fortunately, I could mostly recover -- but it wasn't convenient.
I am usually prepared w/backups, though running short on disk
space, am not as well prepared as I'd like to be.

  One of the programs that fails w/5.18 (and a reason I haven't
upgraded yet) is a mail-sorting program that has code dating back
to perl4 days, but has had upgrades to use newer features that
I didn't realize would remain "permanently experimental".

  That issue still needs to be addressed​: If experimental features
  remain unchanged in multiple main versions -- and the release notes
  don't say they are ***STILL*** experimental, they need to either
  have the experimental tag removed, or be removed from perl.

Of course that still doesn't address issues when previously
non-experimental features are switched to experimental status
w/runtime warnings (like lexical "$_"​: no experimental label
was attached to it in 5.16's perlvar docs; if it was exp -- it
was well hidden). Being able to demote features to experimental
status and having them be subject to deletion w/o notice seems
like a clever way to circumvent deprecation policies.

For example I have a piece of software that parses free-form html files on
the internet. Right now it throws some uninitialized warnings. It didn't
when I wrote it, but the files changed in the mean time. The easiest way
I can debug it is by looking at the files it generates, so that means it has
to finish completely.

That warning in this case has NOTHING to do with security as it is
caused by not ignoring the undef earlier. The only reason I would
look into this warning is that I could be dropping some of the
information I am trying to collect. Which means the actual problem
(if there is one) is likely earlier in the program.


  In my programs, having it die at the first point of failure with
a traceback is usually the quickest way for me to locate and fix the
problem. Having warnings "cry wolf" is the quickest way to have
them become "background noise" that gets ignored.

Actually sometimes the warnings only happen in production, so how
would you fix such a problem before it goes into production?


  Depends on the SW, if it is critical enough, it might be best
for it to "fail" closed. (die) vs. limp along and reformat
data or expose it to thieves. That makes it especially important
for warnings not to be used in place of release notes (as was
the main excuse I heard for simply turning on incompatible
code changes in 5.18). It was said that CPAN was used as a
code-test base to determine impact. Interesting how no CPAN
modules used warnings->FATAL, BUT --- not that surprising,
given that CPAN is a "library" of modules -- and not a library
of programs -- i.e. there are relatively few programs on CPAN
and it would be up to a program to make warnings fatal or not.

  So, inherently, looking to CPAN for effects of unanticipated
warnings would give a rather false impression -- it's not
a program library, but a code/module library.

Comments from perl leadership, established developers et al, indicate
they have no problem suddenly turning on new warnings that can cause
established code in production to fail.

The two big categories of warnings we have added recently
are experimental and deprecated.
The deprecated category warns programmers that they HAVE
to change their code for it to continue to work in the future.


  I thought deprecation warnings previously existed (?)...

The experimental category warns them that they MAY
have to change their code in the future.


  Actually it points out the uselessness of having code that
is experimental in a "stable/released" version at all. Such
experimental code shouldn't be used in "released" products (which
might, arguably, include "stable" versions of perl).

We currently have no other way to programmatically tell programmers
that their code may break in the future. As many/most programmers
don't read and understand every line of every perldelta we have to
do so programmatically. Even if they did read perldelta, it is
easier for Perl to find the problematic code.


That's why to use the new features, one had to say "use 'feature'",
or most importantly -- to get all the new features of release
5.X.Y, one had to explicitly "use 5.X.Y;" at the top of the
code. That is the point that they should be getting affected
by the new "experimental warnings" feature -- that was turned
on by default and w/o needing the use 5.18 -- as perlpolicy
states should have happened.

That is we are "warning" them about a future problem, so we add it
to the list of warnings. (Where else would we put it)


  1) a future that may never come, and 2) when they use the
newest version (use 5.18)...

The effect of such a policy is to ensure that the shipped code in a
language that can generate new warnings on any major update, is to
dissuade best security practices. Those responsible for such
policies are ultimate responsible for a generally lowering of good
security practice in those who use such code.

Again you're mistaking static language security practices with Perl
security practices. I would very much doubt that the people who wrote
that have ever written a non-trivial Perl program.


  That's part of the problem -- thinking that what passes for
good practice in SW doesn't apply to perl. There may be exceptions,
agreed, but more often than not, in the cases I've brought up to try to
fix, they were warts on the language.

  In that vein -- just as "use version" allows specifying a minimum
version, it seems there should be a way to specify a maximum value
as well. If I relied on a "to-be-deprecated" feature, I might
want to limit it specifically via a similar "use max=5.16.3" type
feature.
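There is no built-in counterpart to the minimum-version `use`, but a sketch
of such a guard is straightforward, since `$]` holds the running version as
a number (the helper name and the idea of a "maximum version" check are
invented here for illustration, not an existing perl feature):

```perl
use strict;
use warnings;

# Hypothetical helper: refuse to run on a perl newer than $max.
# 5.016003 would correspond to the 5.16.3 mentioned above.
sub assert_max_perl_version {
    my ($max) = @_;
    die "perl $] is newer than the supported maximum $max\n"
        if $] > $max;
    return 1;
}

assert_max_perl_version(9.999000);   # passes on any current perl
```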

Actually since adding `use v5.12` to your code enables strict mode;
we are encouraging good security practices.


  Am aware of that and started using that as a shorthand -- then
was hit with CPAN's desires to make things compat back to 5.8.x,
so had to go back and get rid of "use versions" and resub "use
strict".

One of the warnings categories is `deprecated` so by fatalizing
every warning you prevent us from warning you before it becomes an
actual problem.

  This was one of my complaints -- *I* DO read the deltas when I
upgrade between major versions -- just like I read the diffs
between kernel releases -- you think the diffs between perl
releases might be challenging -- try reading, for example, the
latest "major changes" for linux-3.13 over 3.12 (from someone
who is running 3.13.1) -- *ouch*/painful.

Incompatible changes are preceded with announcements and
a deprecation cycle. You are capable of disabling the warnings, and
your code will continue to work as it always had. So it is therefore
still compatible with your code.


  That's why I asked for a site-wide switch and for such to
be included in the announcement of 5.18. Adding "export
PERL5OPT=-M-warnings" is a rather brute-force way of achieving
such -- but hardly 'great', as who knows what incompat changes
might have been introduced? It's not like it is just 1
script -- well, as a guess:

~/bin> find . -xdev -name RCS -prune -o -iname \*.orig -prune -o -iname \*.bak -prune -o -type f -print0 |xargs -0 grep '^#!/usr/bin/perl' -n |wc -l
445

  And that's just my home-bin directory. Divide that in half
or by 10 -- whatever, it's still a lot of scripts to change.

Actually if we ignored "input contrary to what we wanted to hear";
there would be ZERO responses to any of your posts. That there are
numerous responses; proves otherwise.


  Just to clarify -- I can't post to the list -- I can only file
bugs and updates to those bugs on the bug website. Maybe you
were aware of that. But if getting information about changes
was important, and/or hearing opposite points of view before
things were released, it wouldn't be a closed list. Ricardo
wants to call it open, but with '1 exception'; from my point
of view -- for a list that was rated as "hot" on the perl.org
site describing lists -- it has proved to have the most
sensitive, least tolerant crowd of any list I've been on (this is
"as a whole", and should not be interpreted as a personal comment
to any one person).

  Anyway I find I am going too far off topic (several paragraphs
deleted) -- indicating I'm too tired and need to change gears...

  Thanks for your comments. Yours, at least seem to be
well reasoned, meant, and intentioned...always, bonus points
for that in my book.


p5pRT commented Feb 1, 2014

From @b2gills

On Fri, Jan 31, 2014 at 11​:57 PM, Linda Walsh via RT
<perlbug-followup@​perl.org> wrote​:

On Fri Jan 31 15​:03​:03 2014, brad wrote​:

I wanted to discuss the points brought up in this post as it seems
to be the crux of the issue.

On Sun Jan 26 13​:47​:19 2014, perl@​froods.org wrote​:

Developers must understand the ramifications of 'use warnings
FATAL => "all"' or by using the -w option.

On Sun, Jan 26, 2014 at 9​:36 PM, Linda Walsh via RT wrote​:

Indeed. One must look at reasons why they do things as well.

My background is kernel and security programming. It is common
practice to ensure the kernel compiles with no warnings.

This has roots in the security "best practices" as recommended by
CERT
@​ https://www.securecoding.cert.org/confluence/display/seccode/Top+10+Secure+Coding+Practices

The reason they don't recommend disabling any warning in C is that you
can't disable a warning in only the location where you know it is
fine.
In Perl you can, and sometimes it is necessary.
----
I remember pragmas from 20+ years back to do just that. They are usually
compiler specific.

Ok you got me; I don't know a lot about C.
I may have actually programmed more in Asm than C.

If I had to pick a low-level language, it would be D.

Perl also helps in security practice number 7 by warning you when you
call stat with potentially unsanitized data.
( this is exactly the warning that we are discussing )
----

    Sort of "unsanitized": the code was trying to replicate an existing
web page. It was verified insofar as it produces a valid web
page in most browsers and is successfully parsed by Mojo.

    The input in question is NOT *INVALID* -- from the standpoint that
it works in browsers and Mojo-parsing. Can you imagine browsers that
popped up warning "popups" with any invalid HTML?

It's unlikely for there to be a file with a newline in its name.
In fact I'm sure there is at least one operating system
where it is impossible to create such a file.

At no point did I say it was invalid.

Normally the only time stat would get a filename with a newline
is if it came from STDIN or a line oriented file format, and no-one
remembered to chomp the newline off.
In such a case it could take someone who was new to Perl hours before
they find the problem, if they ever do.

That it warns is perhaps the best thing it could do.

If you are certain that you don't have a problem in that code,
add a `no warnings 'newline'` in as small a block as possible
with a comment explaining its purpose.

That's best practices for secure C software.

The best practice for secure Perl software is to only fatalize
some warnings so that your software continues to work, to help you
find the exact problem that needs fixing. The root of the problem
may be somewhere other than where the warnings come from, which
might not be evident until you have many instances of it.
----

    The warnings keep changing -- code that works today w/o warnings may
die tomorrow -- since the question becomes -- what do you do with
warnings you haven't classified -- do you fail "open" or fail "closed"?

    If you are the CIA or NSA, you want fail "closed".

    I am not thinking of security as much in terms of violations of
sensitivity but more in terms of violations of integrity.

    If I am writing programs that need to run as root because they are
creating and destroying file systems, *daily*, the consequences of
something disintegrous running amok can be disastrous. Most of my
personal concerns on many of my perl programs have to do with a concern
of what might happen if I don't fail in "unknown situations" -- like
unexpected warnings.

Imagine some software that modifies a file only to have it stop
halfway through because someone fatalized every warning.
Now the file is corrupted.


Imagine a three letter organization is using Perl to find a person of interest.
Now imagine if someone forgot to initialize a scalar to "".

  print $LOG '$some_scalar​: ',$some_scalar if rand > 0.99;

Now that person got away.


Imagine a web site that has an API request that only accepts numbers
but they didn't sanitize their data, because the only thing that used
it was some client side JavaScript.
Now someone comes along and just sends a request using wget or curl
with "abc123" to see what happens.
Talk about a perfect DDOS attack.
Now they have to wake up a developer to go and add a single line
right before the offending line​:
  no warnings 'numeric';

Sometime later they have a meeting and determine that if that API
request gets a similar malformed request that they want it to be the
same as if it were '0'.

So they just leave the line that was added, and call it done.

Now they have to report the incident because they were "hacked".
They could potentially lose a lot of future revenue because of this.

If only they had just logged warnings instead of blindly fatalizing them all,
they could have fixed the code at their leisure.
Then they could just deploy the new code at the next scheduled upgrade.

The success of Perl software is sometimes how well it responds to
possible errors, and that could require that it continue to run
after an error. ( a bug riddled server makes more money than one
that isn't running at all )
----
And one that crashes losing all user data doesn't make much at all.
The only reason that some servers continue to make money is the owners
of the servers are not held responsible for the monetary costs of
the security failure.

If your server always goes down shortly after a hacker attacks it in
a certain way because you fatalized warnings,
now they know where there is a possible weakness.

Security through obscurity is better than no security at all.

By fatalizing all warnings you have removed the obscurity.

So everywhere your security wasn't absolutely perfect,
hackers will have an easier time finding it.

    Wasn't it Target that just recently had a score-million credit
cards released to thieves? The costs of that aren't held against
Target, but borne by the credit card companies and card holders.

That was because they were using unpatched Windows XP
in their point of sale devices.
Even the most secure code needs to be updated once in a while.

If it were from Perl code, and you fatalized warnings, then the POS
device would instantly blue-screen when they were on the right track.
Now they have more knowledge with which to attack it.

If instead you logged all of the warnings, and sent them to a central
server; you would be able to tell that something fishy was happening
on that particular device. You could then go and look at the code
that was generating the warnings and harden it against attack.

    Discover used to handle all the time costs of switching a lost
or stolen CC#. They also offered more options for customers to keep
their CC# private (like one-time use numbers and such). They
stopped offering assists w/contacting recurring payment merchants.

    When the onus for contacting merchants (many online) reverted
to the CC-holder, Discover also stopped offering the extra
privacy features for online transactions -- no financial benefit
to them. The costs to contact all merchants might only be an hour
or less, but multiplied by customer base -- not inconsequential.

If merchants w/insufficient security had to pay out of pocket for all
the tangential costs of a break-in, they might change their attitude
about allowing software that Fails "open" and servers that "limp by".

Even if they did have to pay, it still might make economical sense to
leave the code alone if it costs more to fix than they are paying out.

Back in the day Ford did a cost-benefit analysis of adding a couple
of inches to a fuel line.

They determined that they could save more than enough money to
cover the monetary damages from lawsuits.

They only saved $11 from each Pinto they produced.
According to the page I read; that saved them $137 million.

( search for "ford pinto case study" )


    With my home server, I've had an opposite stance on the issue of
just allowing it to boot in the face of boot-probs, preferring it come
up, if limping, so that I can login and diagnose the problem vs.
having to create a new disk image via backups.

    Generally, I have tended toward the side of caution, but foolish
consistency is the hobgoblin of small minds. ;-) But I am wary of
ignoring problems in my perl scripts that run my systems. Having them
fail could cause partitions and/or file-systems to be "cleaned" and/or
deleted if something went disastrously wrong.

    A simple dedup program "ate" a terabyte or two of data -- that,
fortunately, I could mostly recover -- but it wasn't convenient.
I am usually prepared w/backups, though running short on disk
space, am not as well prepared as I'd like to be.

I wrote a simple dedup program once, I didn't have a problem
because I just had it spit out a list of duplicate files.
That way I could go and check that my logic wasn't fundamentally flawed.

I've also written quite a few little programs that just modify a file
slightly. I've always had them write to a new file so they can't
break anything.

    One of the programs that fails w/5.18 (and a reason I haven't
upgraded yet) is a mail-sorting program that has code dating back
to perl4 days, but has had upgrades to use newer features that
I didn't realize would remain "permanently experimental".

That issue still needs to be addressed​: If experimental features
remain unchanged in multiple main versions -- and the release notes
don't say they are ***STILL*** experimental, they need to either
have the experimental tag removed, or be removed from perl.

Of course that still doesn't address issues when previously
non-experimental features are switched to experimental status
w/runtime warnings (like lexical "$_"​: no experimental label
was attached to it in 5.16's perlvar docs; if it was exp -- it
was well hidden). Being able to demote features to experimental
status and having them be subject to deletion w/o notice seems
like a clever way to circumvent deprecation policies.

Actually popular opinion is that given/when should be changed, not removed.
It may have been the main impetus to add that warning category.

Actually sometimes the warnings only happen in production, so how
would you fix such a problem before it goes into production?
---

Depends on the SW; if it is critical enough, it might be best
for it to "fail" closed (die) vs. limp along and reformat
data or expose it to thieves. That makes it especially important
for warnings not to be used in place of release notes (as was
the main excuse I heard for simply turning on incompatible
code changes in 5.18). It was said that CPAN was used as a
code-test base to determine impact. Interesting how no CPAN
modules used warnings->FATAL, BUT --- not that surprising,
given that CPAN is a "library" of modules -- and not a library
of programs -- i.e. there are relatively few programs on CPAN
and it would be up to a program to make warnings fatal or not.

I think I would rather have it shut itself down cleanly than
create corrupted data because it was in the middle of something.

Also it may be easier to find out what went wrong if you can
inspect it before it shuts down. Particularly if it doesn't happen
very often.

So, inherently, looking to CPAN for effects of unanticipated
warnings would give a rather false impression -- it's not
a program library, but a code/module library.

Actually we only look for CPAN modules that are broken,
as that could actually be a bug in CORE.

Comments from perl leadership, established developers et al, indicate
they have no problem suddenly turning on new warnings that can cause
established code in production to fail.

The two big categories of warnings we have added recently
are experimental and deprecated.
The deprecated category warns programmers that they HAVE
to change their code for it to continue to work in the future.
----
I thought deprecation warnings previously existed (?)...

I was remembering wrong, I was probably remembering something
about things being added to that category that should have been
in there earlier. ( like $[ )

The experimental category warns them that they MAY
have to change their code in the future.
----
Actually it points out the uselessness of having code that
is experimental in a "stable/released" version at all. Such
experimental code shouldn't be used in "released" products (which
might, arguably, include "stable" versions of perl).

If it wasn't in a stable version of Perl, then no one would use it.
We need people to use it to find bugs or design problems.
We also want to be able to tell people that there may be some
compatibility problems in the future.
So that people who don't want to have to modify their code as
the feature develops can stay away.

Basically almost every added feature from now on is going to be experimental
for a few releases, until we get it nailed down.

( At least that is my impression of the situation )

We currently have no other way to programmatically tell programmers
that their code may break in the future. As many/most programmers
don't read and understand every line of every perldelta we have to
do so programmatically. Even if they did read perldelta, it is
easier for Perl to find the problematic code.
----------

That's why to use the new features, one had to say "use 'feature'",
or most importantly -- to get all the new features of release
5.X.Y, one had to explicitly "use 5.X.Y;" at the top of the
code. That is the point that they should be getting affected
by the new "experimental warnings" feature -- that was turned
on by default and w/o needing the use 5.18 -- as perlpolicy
states should have happened.

That serves a completely different purpose.
It is to allow us to add things without breaking old code.

For example if someone wrote a subroutine named "say"
it would collide with the built-in "say".

Any code that doesn't ask for a feature will continue to work as it always had.
If it does ask for it, it will work in a new way that is incompatible
with the old way.

The effect of such a policy -- shipping code in a language that can
generate new warnings on any major update -- is to dissuade best
security practices. Those responsible for such policies are
ultimately responsible for a general lowering of good security
practice in those who use such code.

Again you're mistaking static language security practices with Perl
security practices. I would very much doubt that the people who wrote
that have ever written a non-trivial Perl program.
----
That's part of the problem -- thinking that what passes for
good practice in SW doesn't apply to perl. There may be exceptions,
agreed, but more often than not, in cases I've brought up to try to
fix, they were warts on the language.

I'm not saying they don't generally apply, only that how you apply the
one isn't always the best way to apply it.

What they tell you to do is eliminate all of the warnings before
it goes into production. With Perl, that is not always possible.
When the code is actually in production it would already be outside
of where that practice was meant to be applied. So perhaps what
it recommends doing shouldn't be applied in production without
careful consideration.

In that vein -- just as "use version" allows specifying a minimum
version, it seems there should be a way to specify a maximum version
as well. If I relied on a "to-be-deprecated" feature, I might
want to limit it specifically via a similar "use max=5.16.3" type
feature.

Actually there is a way to do that now

  no v5.16.4;

One of the warnings categories is `deprecated` so by fatalizing
every warning you prevent us from warning you before it becomes an
actual problem.

This was one of my complaints -- *I* DO read the deltas when I
upgrade between major versions -- just like I read the diffs
between kernel releases -- you think the diffs between perl
releases might be challenging -- try reading, for example, the
latest "major changes" for linux-3.13 over 3.12 (from someone
who is running 3.13.1) -- *ouch*/painful.

I'm just saying that we want to surprise as few people as possible
when a given feature is finally removed.

Also if we did wait a release before we warn, that would delay
the removal another year. That's assuming we remembered
to add the warning after the release is done.

Incompatible changes are preceded with announcements and
a deprecation cycle. You are capable of disabling the warnings, and
your code will continue to work as it always had. So it is therefore
still compatible with your code.
---
That's why I asked for a site-wide switch and for such to
be included in the announcement of 5.18. Adding "export
PERL5OPT=-M-warnings" is a rather brute-force way of achieving
such -- but hardly 'great', as who knows what incompat changes
might have been introduced? It's not like it is just 1
script -- well, as a guess:

~/bin> find . -xdev -name RCS -prune -o -iname \*.orig -prune -o -iname \*.bak -prune -o -type f -print0 |xargs -0 grep '^#!/usr/bin/perl' -n |wc -l
445

And that's just my home-bin directory. Divide that in half
or by 10 -- whatever, it's still a lot of scripts to change.

"export PERL5OPT=-M-warnings=deprecated"

I actually have more than one version of Perl installed so that if I want
to see if a piece of code works on 5.14.1 I can just run it​:

perl-5.14.1 -E'...'

perl-5.14.1 is a symlink in /opt/perl/bin to /opt/perl-5.14.1/bin/perl

It does mean that I have to run cpanp at least once for every
version I have currently when I want to update them.

I've been thinking about writing some code to do this for me,
but it hasn't been that big of a hassle yet.

So if you did that you could just change the shebang line to point
to a different version of Perl.

Actually if we ignored "input contrary to what we wanted to hear";
there would be ZERO responses to any of your posts. That there are
numerous responses; proves otherwise.
---
Just to clarify -- I can't post to the list -- I can only file
bugs and updates to those bugs on the bug website. Maybe you
were aware of that. But if getting information about changes
was important, and/or hearing opposite points of view before
things were released, it wouldn't be a closed list. Ricardo
wants to call it open, but with '1 exception', from my point
of view -- for a list that was "rated as" "hot" on the perl.org
site describing lists -- it has proved to have the most
sensitive, least tolerant crowd of any list I've been on (this is
meant "as a whole", and should not be interpreted as a personal
comment to any one person).

I was aware of that, I was referring to the posts on RT and
the emails you sent before you were blocked.

As to the openness of the list, they have listened to what I have
to say, and I was not in the community at all until I sent in my
first patch.
( I'm still not really "in the community", but that has more to
do with me than anything else )

Anyway I find I am going too far off topic (several paragraphs
deleted) -- indicating I'm too tired and need to change gears...

Thanks for your comments. Yours, at least seem to be
well reasoned, meant, and intentioned...always, bonus points
for that in my book.

I don't think we are ever going to see eye to eye on this, so
I'm going to mute this conversation once I send this.

@p5pRT
Author

p5pRT commented Feb 1, 2014

From perl-diddler@tlinx.org

On Sat Feb 01 03​:29​:49 2014, brad wrote​:

Let me start by saying I probably won't reply to each point --
AND that I think we are in basic agreement on the main points, but
how to get from here to there ... well, the devil's in the
details.

It's unlikely for there to be a file with a newline in its name.
In fact I'm sure there is at least one operating system
where it is impossible to create such a file.


  I'm sure there are some, but Linux and NT aren't among them.
I think the Windows interfaces do strip a lot of that out,
but the underlying NT calls are count-based, not 0-terminated.
  It's a great way [sic] for both malware and DRM (actually,
is there a difference?) to install keys in your registry and files
on your disk that windows tools can't touch -- if they can see it
at all. At least for the registry there's a Sysinternals tool
to remove null-containing entries (it uses the undocumented
NT interfaces).

If you are certain that you don't have a problem in that code,
add a `no warnings 'newline'` in as small of block as possible
with a comment explaining it's purpose.


I'm certain it is properly looking for, and would create, a
valid cache file that would properly be able to be looked up
in future references -- though I may be changing that area of
the code and substituting local filenames for the http:// filenames,
so the file with the \n in it may go by the wayside anyway.

Imagine a some software that modifies a file only to have it stop
half way through because someone fatalized every warning.
Now the file is corrupted.


  Now that's another discussion -- I try to write my software
to 'fail-safe' -- i.e. like "updatedb" -- generate the new
db in a tmp file, and only after successfully building the new
locate.db is it put in place. I try to construct my files
that way.

Imagine a web site that has an API request that only accepts numbers
but they didn't sanitize their data, because the only thing that used
it was some client side JavaScript.
Now someone comes along and just sends a request using wget or curl
with "abc123" to see what happens.


  That's why you limit the # of transactions per instance of the prog
to only a few, or '1', so errors in one transaction don't mess up
others.

Now they have to report the incident because they were "hacked".
They could potentially lose a lot of future revenue because of this.
===
  [if people cared], but Intel has produced chips that couldn't do
math and it still grew into a monopoly for the x86 market.

If only they had just logged warnings instead of blindly fatalizing them all,
they could have fixed the code at their leisure. Then they could just deploy
the new code at the next scheduled upgrade.


  The needs of external security programs are different than those
that operate on the inside of a firewall and are worried more
about data integrity than privacy.

If your server always goes down shortly after a hacker attacks it in
a certain way because you fatalized warnings;
Now they know where there is a possible weakness.


  That would be bad, but if it goes down more than once due to the
same problem, having it disable the SW might not be the worst thing.

  When my mail filter worked better (it's more held together
with duct tape and baling wire these days), when it encountered
a fatal error, it killed off the fetchmail program that was
responsible for retrieving new mail -- I let it accumulate on
an ISP until I fixed the error.

For some reason it doesn't kill itself like it used to -- but in
parallel with normal mail delivery, sendmail sends a raw stream to
another internal address -- so all messages arrive twice on my system.
If something causes the filter to start bouncing messages,
the "unfiltered box" still collects, and I split off any messages
newer than the fail-point. They get replayed through the 'fixed'
version -- both as a 'test', as well as having all such email
categorized by the new fixed system.

By fatalizing all warnings you have removed the obscurity.


  Some, but not all.

So everywhere your security wasn't absolutely perfect,
hackers will have an easier time finding it.


  Maybe, maybe not -- not if the old system of killing
incoming email and allowing it to accumulate upstream is in place
(the error returned from sendmail was 'temporary failure', try
again later).

Wasn't it Target, that just recently had a score-million credit
cards released to thieves. The costs of that aren't held against
target, but by the credit card companies and card holders.

That was because they were using unpatched Windows XP in their point
of sale devices. Even the most secure code needs to be updated once
in a while.


  When I first went from XP to Win7, Win7 ate my disk at least
3 times in the first 3 months (before transitioning to Win7, I
made sure to remove all data from the system and put it on a linux
server that serves it up via roaming profiles (should I wait for
them)).

If it were from Perl code, and you fatalized then the POS device
would instantly blue-screen when they were on the right track.
Now they have more knowledge with which to attack it.


  In my case, not. The defective SW falls back to fail-safe
operation.

If instead you logged all of the warnings, and sent them to a central
server; you would be able to tell that something fishy was happening
on that particular device. You could then go and look at the code
that was generating the warnings and harden it against attack.


  I got warnings coming out my ... pores! I almost never have time
to review them all. It's on my task list to automate their
processing... but only so much of me to go around.

Even if they did have to pay, it still might make economical sense to
leave the code alone if it costs more to fix than they are paying out.

(Ford-Pinto bombs noted!)
The car industry is chump change. Notice how the finance industry
took the US taxpayer to the cleaners over costs measuring in the
trillions in the long run. Corrupt GOP leaders didn't bat an eyelash
in forcing the US taxpayers to give up the equivalent of about
half a year's total domestic product -- yet when the chumps in
Detroit asked for less than 1/100th of that amount to help them
stay in business -- they were laughed at.

  Human life is not valued very highly -- but the bank and
investment accounts of the wealthiest .5% of Americans --
that gets top notice.

A simple dedup program "ate" a terabyte or two of data -- that
fortunately, I could mostly recover -- but it wasn't convenient.
I am usually prepared w/backups, though running short on disk
space, am not as well prepared as I'd like to be.

I wrote a simple dedup program once, I didn't have a problem
because I just had it spit out a list of duplicate files.
That way I could go and check that my logic wasn't fundamentally
flawed.

Mine was years past doing that -- and it worked on files less
than a gig or two. Coming up with the list of _potential_
dups takes about 1/4th to 1/5th the time.

The actual linking and deleting takes most of the time:
Example (a partial dup of my font dir copied to a 2nd dir;
caches dropped before run):

time ndedup 1 2
Paths​: 10188, uniq nodes​: 10188, tsize​: 7.6GB (7.6GB alloc).
4045 size queues with 2 or more files
Longest Q​: 18 items w/size 5672500 ea.
Explore & Sort finished at 3.222s
Creating HoSSL finished at 3.686s
@​ finish it found 4.2GB in 5777 duplicate files found.
129.65sec 19.98usr 16.66sys (28.27% cpu)


So spanning the tree took 3.2s,
data structs took another .44s,
w/compares + links adding over 2 minutes.

Just the 1st 2 steps on my home dir​:
Paths​: 251380, uniq nodes​: 71600, tsize​: 115.6GB (114.1GB alloc).
12366 size queues with 2 or more files
Longest Q​: 4490 items w/size 0 ea.
Explore & Sort finished at 70.221s
Creating HoSSL finished at 74.797s

--- (program still 'under construction')

I've also written quite a few little programs that just modify a file
slightly. I've always had them write to a new file so they can't
break anything.


Ditto. To link a->b, it creates a hard-link tmp name in the dir where
'b' is, then deletes the real 'b', then moves b-tmp to the original
b-name. All done to verify permissions, w/the tmp file having the name
of the program that did the creating & the words "_to_delete_" in the
tmp name. I **try** to be careful.

In debug mode, the files are compared twice -- once with my
algorithm (pure perl), and 2nd with external 'cmp' prog -- if
they don't agree, emits warning and doesn't do link.

Of course that still doesn't address issues when previously
non-experimental features are switched to experimental status
w/runtime warnings (like lexical "$_"​: no experimental label
was attached to it in 5.16's perlvar docs; if it was exp -- it
was well hidden). Being able to demote features to experimental
status and having them be subject to deletion w/o notice seems
like a clever way to circumvent deprecation policies.

Actually popular opinion is that given/when should be changed, not
removed.
It may have been the main impetus to add that warning category.


  I was referring more to "my $_" -- given/when were listed as
experimental in 5.16.

I think I would rather have it shut itself down cleanly than
create corrupted data because it was in the middle of something.


  Well, middle in my case is in writing to a tmp file, so
not really "dirty".

Also it may be easier to find out what went wrong if you can
inspect it before it shuts down. Particularly if it doesn't happen
very often.


 
  That's why I generate a lot of fatal diagnostics on a
warning -- want to be able to reconstruct the failure.

I actually have more than one version of Perl installed so that if I
want
to see if a piece of code works on 5.14.1 I can just run it​:

perl-5.14.1 -E'...'

perl-5.14.1 is a symlink in /opt/perl/bin to /opt/perl-5.14.1/bin/perl


Um... getting there​:

Ishtar​:law> ll /home/perl
total 32
drwxrwxr-x+ 2 8192 Jan 8 22​:05 doc/
drwxrwxr-x+ 3 16 Jan 1 16​:11 perl-5.10.1/
drwxrwxr-x+ 3 16 Jan 1 15​:59 perl-5.12.5/
drwxrwxr-x+ 7 58 Jan 23 15​:37 perl-5.16.3/
drwxrwxr-x+ 4 27 Jan 4 15​:36 perl-5.8.9/
Ishtar​:law> ll -d /usr/bin/perl*|grep -P 'perl-?5'
lrwxrwxrwx 1 25 Jan 22 17​:36 /usr/bin/perl -> /usr/bin/perl-5.16.3/perl*
drwxrwxr-x 2 4096 Jan 22 17​:36 /usr/bin/perl-5.16.2/
lrwxrwxrwx 1 26 Jan 22 17​:33 /usr/bin/perl-5.16.3 -> /home/perl/perl-5.16.3/bin/
-rwxr-xr-x 4 1566232 Jan 26 2013 /usr/bin/perl5.16.2*
lrwxrwxrwx 1 31 Jan 22 17​:36 /usr/bin/perl5.16.3 -> /usr/bin/perl-5.16.3/perl5.16.3*
-rwxr-xr-x 2 1703769 Aug 23 15​:53 /usr/bin/perl5.18.0*


Not quite fleshed out fully, but just started doing that
recently...

So if you did that you could just change the shebang line to point
to a different version of Perl.


(or my path or symlinks in /usr/bin...)

I don't think we are ever going to see eye to eye on this...

Considering the similarities in handling many things,
I find it odd for you to say so....

Forgive any braino's -- I was sleep typing most of
this (couldn't sleep, but was tired'er than heck).

@p5pRT
Author

p5pRT commented Feb 1, 2014

From @kentfredric

On Fri Jan 31 21​:57​:47 2014, LAWalsh wrote​:

The input in question is NOT *INVALID* -- from the standpoint that
it works in browsers and Mojo-parsing. Can you imagine browsers that
popped up warning "popups" with any invalid HTML?

Actually, the de facto behaviour for browsers is the same as the de facto behaviour for perl.

1. When there is an error in the HTML parse, a warning _is_ emitted. Sometimes the warning is serious enough that you get output on the terminal you opened your browser from. Other times you need to find the "developer console", which will, for most web pages, log any number of possible warning conditions that could be indications of runtime problems.

2. Like Perl, browsers do not fatal. You can augment it somewhat I think to make it fatal, but nobody really does this.

3. Like Perl, browsers keep going as best they can, because fatal exits are entirely unwanted.

A "popup" and a "warning" are 2 very very different behaviours and you really ought not to conflate the two. A popup is a blocking interface that _demands_ a response.

A warning is simply a message designed in such a way that you may ignore it if you so desire.

If dying on petty things that might not be a problem is unacceptable, then fatalising this warning is user error.

You're not going to complain "Well, those browsers should be adjusted so the warnings don't happen", the warnings exist for a reason, and they're there for the utility of people who care to look.

You're not going to successfully argue that those warnings should be fatal, because 90% of the internet would spontaneously be off limits to you.

And you're not going to successfully argue that those warnings should not be warnings, but should instead be documented behaviours, and the warnings themselves should be documented, for a great plethora of reasons.

And worse, and this is important: Even if it was documented to behave as such, the odds of you reading the documentation and seeing the notes about the possible warning in advance of the warning occurring are so low, it's negligible.

And if you documented it absolutely everywhere it was relevant to do so, the documentation would become obscene and unreadable due to the substantially increased context one would have to learn to read and read selectively.

For instance, methods like 'require' in perldoc perlfunc are already 196 lines of text for that one function. Are you telling me you read and understood every line of that prior to ever using require? `use` is another 124 lines, have you read and fully understood each and every line? Have you read all 417 lines of the documentation for `open` ? Additionally, have you read all the documentation in perlport about the `open` function, something that is suggested at the bottom of the `open` documentation?

I'm not saying you most certainly haven't, but I would imagine most people haven't, because many/most people are only interested in what is relevant to them, and will skim for context that matches their mental keyword scanner.

And that mentality is very useful, but it's somewhat blind to seeing things that they don't yet know are relevant to them.

Which is why warnings occur: Because we accept that you're not able to know everything, and we accept that being human, you are fallible, and some of the time your failings look like intent, and vice versa.

Hence, we have heuristics for known, common problems that people face, so that they can avoid these problems.

In essence, there is literally _nothing_ you can do to improve the documentation to prepare people for these issues. If they don't read the documentation, or the updates to the documentation, they can't be helped. Yet, they will still encounter side effects of what they do, things they never expected would be a problem, and the heuristic lets them know "Hey, this could be a problem", and the diagnostic data gives more depth on the kind of problem expressed.

And worse, even people who _know_ about the problems they can face, _still_ make the mistakes that these warnings exist to guard against.

These people will realize, upon seeing the warning "whoops, I made that mistake again, time to fix".

And again, for these people, no documentation enhancement will help. They know it's a problem; they just don't know they're causing the problem at the time they cause it.

I don't mean to say "no documentation at all" here, just that I'm rather aware that sometimes far too much documentation is just as problematic as not enough. You can improve on this with formatting and layout sometimes, but it's _really_ _really_ _hard_.

If one was to document this specific warning case in every user-facing function that could trigger it, I'd suggest that instead of every function documenting it individually, the functions should say: "NB. This function's internals are defined in terms of X, Y and Z, and anything that is applicable there is applicable here too", and then document the propensity for the warning at the innermost callpoint.

At least, you could say then "It's documented", just the documentation would be a little hard to read.

I'd really love to have some sort of inheritance aware documentation viewer that rendered perldoc for things /including/ their deeply nested implementation documentation in a close proximity, but that's something we just simply do not have yet.

@p5pRT
Author

p5pRT commented Feb 2, 2014

From perl-diddler@tlinx.org

On Sat Feb 01 06:58:10 2014, kentfredric wrote:

On Fri Jan 31 21:57:47 2014, LAWalsh wrote:

The input in question is NOT *INVALID* -- from the standpoint that
it works in browsers and Mojo-parsing. Can you imagine browsers that
popped up warning "popups" with any invalid HTML?

Actually, the defacto behaviour for Browsers is the same as the
defacto behaviour for perl.

1. When there is an error in the HTML parse, a warning _is_ emitted.
=====

  There is a difference.

  The warnings in browsers go to a browser-specific, separate console.
They are not intermixed with program output.

  That's the crux of the problem w/warnings in perl. If you don't
know about the warning in *advance*, the only way to prevent the
warning from intermixing with program output is to turn all warnings
off.

  Note -- browser errors, like page not found, are sent to a
separate location from regular rendering --- usually a separate
page. They are not intermixed with normal output. Another difference
is that no browser that I am aware of will keep rendering output
such that it scrolls off the screen. **IF** you are lucky, you
have perl output going to a window that can be scrolled-back, but
given that tty-output is relatively unformatted, warning and error
messages usually don't stand out from normal output -- UNLESS,
they terminate the output. Only if they terminate execution
do error or warning messages "stand out" from other output so
they will be noticed.

  Apart from security -- just NOTICING the message -- or being
aware of it, is the first step to solving any problems.

2. Like Perl, browsers do not fatal. You can augment it somewhat I
think to make it fatal, but nobody really does this.


  Aside from my browser crashing on some fatal problems (1/2 ;-),
you CAN make error sections or error text in a browser stand out
with color, larger fonts, or by springing open a popup.

  It used to be the case (don't know about now), that IE would,
by default, jump into a debugger upon an error (and maybe some
warnings -- dunno, didn't/don't use IE enough).

3. Like Perl, browsers keep going as best they can, because fatal
exits are entirely unwanted.


  Um... browsers just don't work in the face of errors. I click
on links that do nothing or try to enter input in what should be
input boxes, and have nothing echoed. Perl rarely does that
in response to an error -- though it can go off into an infinite
loop.

  FF -- when something gets stuck in an infinite loop for more than
10 seconds tells you a script on the page is busy -- do you wish to
continue or stop it? Perl never offers such a choice. The closest
related action I can think of in perl would be early detection and
termination of infinite recursion.

A "popup" and a "warning" are 2 very very different behaviours and you
really ought not to conflate the two. A popup is a blocking interface
that _demands_ response.


  Maybe -- if the popup grabs focus, then yes -- but if it doesn't,
it simply becomes a place where some message is displayed outside
of the flow of normal output.

A warning is simply a message designed in such a way that you may
ignore it if you so desire.


  Not always with browsers. They have popups that require responses --
like this message that cannot be turned off:

  "Although this page is encrypted, the information you have entered is
sent over an unencrypted connection and could easily be read by a third
party. Are you sure you want to continue sending this information?"

  Used to see that all the time on some microsoft web pages where it
signed you in through 'live', but redirected you to an unencrypted page.

If dying on petty things that might not be a problem is
unacceptable, then fatalising this warning is user error.


  People filed a bug on that behavior, because unlike many similar
security warnings (mixed content, or going from secure to an
insecure page), it didn't allow the user to disable the message.
The behavior was justified as being something the user shouldn't
be able to turn off.

You're not going to successfully argue that those warnings should be
fatal, because 90% of the internet would spontaneously be off limits
to you.


  Much of it is. I run w/NoScript AND Request Policy (which allows
permitting inter-site requests that get blocked by default w/its
standard policies in place). Similarly, NoScript and its
restrictions block a great number of sites by default -- it requires
specifically permitting sites and inter-site communications for things
to work.

  If I determine that something a warning warns me about isn't
a problem, I silence that warning in that circumstance. I.e. my
browser acts like my perl. Things fail by default. I would be
rather dismayed if ALL sites broke w/out scripts enabled, but
for years, good web page design has preached "graceful degradation".

  For perl, a parallel mechanism, might be to allow whatever
functionality that existed "prior to the new warnings", to continue
to function w/o problems -- but to get any new features, one would
have to accept new warnings as well...

  I.e. a policy of not adding new warning features unless the user
specifically wants the new features of that version. I'm sure
people will say the two situations are different, BUT, IF one were
to draw parallels between the two -- how would graceful degradation
look if one didn't allow or ask for the new 'experimental warnings'
feature?

And you're not going to successfully argue that those warnings should
not be warnings, but should instead be documented behaviours, and the
warnings themselves should be documented, for a great plethora of
reasons.
====
  But a user of a browser has never had to read any documentation to
operate a browser. They do need to read extensive documentation if
they want to write an extension that works with DOM and other security
features in a browser.

  Ignoring warnings in javascript, will, more often than not, cause
features NOT to work. It's harder to partition a perl program
into working and non-working areas.

  How often (or how many examples) can you think of where each program
section is sandboxed from any other section and only communicates through
specific interfaces? It's rare for a perl program to be defined that
way, in my experience.

And worse, and this is important​: Even if it was documented to behave
as such, the odds of you reading the documentation and seeing the
notes about the possible warning in advance of the warning occur are
so low, its negligible.


  To operate a browser, I agree. To program javascript to operate with
the DOM-security model or within security params set up by
modern browsers, I'd disagree -- you can't get things to work if you
don't read documentation.

And if you documented it absolutely everywhere it was relevant to do
so, the documentation would become obscene and unreadable due to the
substantially increased context one would have to learn to read and
read selectively.

For instance, methods like 'require' in perldoc perlfunc are already
196 lines of text for that one function. Are you telling me you read
and understood every line of that prior to ever using require? `use`
is another 124 lines, have you read and fully understood each and
every line? Have you read all 417 lines of the documentation for
`open` ? Additionally, have you read all the documentation in
perlport about the `open` function, something that is suggested at the
bottom of the `open` documentation?


  Much of the require and use documentation is new. However, I would
say that yes -- I've read the entire sections on each of those
before using them -- as I didn't know how to use them w/o reading them.

  The perlport stuff -- if I am looking to make something portable,
I read that -- but things written before that section was around that
worked and still work? Nope. I have since, but I can't say I've
read every new line added. I'm not aware of any perl documentation
that comes with "change bars" so I can see what's changed from the
last reading.

I'm not saying you most certainly haven't, but I would imagine most
people haven't, because many/most people are only interested in what
is relevant to them, and will skim for context that matches their
mental keyword scanner.


  Agreed.

And that mentality is very useful, but its somewhat blind to seeing
things that they don't yet know is relevant to them.


  Agreed, BUT I was aware of open stripping blanks from ages ago --
I wasn't aware it was supposed to strip all trailing white space --
and that the warning of 'newlines' following a filename CANNOT happen
in an implementation that follows the documentation.

If one was to document this specific warning case in every user facing
function that could trigger, I'd suggest that instead of every
function documenting it individually...


  There are only 2 areas or functions that have this warning
(3 if you count lstat separately from stat).

  Under stat, you would have a note that stat'ing a non-existent
filename containing an embedded newline produces a warning, on the
premise that the newline might have been accidental.

  Under lstat, you would have a similar warning (see warning about
embedded newlines under 'stat').

  Under the -X file calls​: since all of these use the stat function,
the same warning about embedded newlines applies here (see 'stat').

  If you look at all the cases documented for the "-X" calls, that
above documentation would be an additional 2 lines to the current
140.

  If you really think that is confusing, compare it to the length
of this bug report.

  Second -- if you are talking heuristics... it should likely
be the case that such a test is only done for newline at the end
of the file name, since​:

my $fname="test.
log";

looks less accidental than

my $fname="test
";

Third -- the check on the open call is just wrong. It
contradicts the existing documentation.
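For what it's worth, the warning under discussion belongs to perl's `newline` warnings category, so code that deliberately handles such filenames can silence just that check without turning all warnings off. A minimal sketch (the filename here is hypothetical):

```perl
use strict;
use warnings;

my $fname = "test\n";    # hypothetical filename ending in a newline

{
    no warnings 'newline';    # suppress only this heuristic warning
    if (-e $fname) {
        print "file exists\n";
    }
}
```

All other warnings remain enabled outside (and inside) the block; only the newline heuristic is disabled, and only for that lexical scope.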

I'd really love to have some sort of inheritance aware documentation
viewer that rendered perldoc for things /including/ their deeply
nested implementation documentation in a close proximity, but thats
something we just simply do not have yet.


It's called HTML. But for line-oriented manpages, you could go for
an arcane format like "info pages" that gnu uses, but those are
rather hard to use if you aren't already an emacs user, and not
nearly as portable as manpages (which can be reformatted into
printer-ready PostScript or HTML, as pod pages can be, as well).

@p5pRT
Author

p5pRT commented Feb 3, 2014

From @Abigail

On Sun, Feb 02, 2014 at 01:17:49PM -0800, Linda Walsh via RT wrote:

The warnings in browsers go to a browser-specific, separate console.
They are not intermixed with program output.

That's the crux of the problem w/warnings in perl. If you don't
know about the warning in *advance*, the only way to prevent the
warning from intermixing with program output is to turn all warnings
off.

Program output goes to STDOUT. Warnings go to STDERR.

Those were different channels long before Tim Berners-Lee invented the web.
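This stream separation can be demonstrated directly from any shell; a minimal sketch (the function and log file names are illustrative):

```shell
# Stand-in for a program: fd 1 (STDOUT) carries program output,
# fd 2 (STDERR) carries the diagnostic.
emit() {
  echo "program output"
  echo "some warning" >&2
}

# The two streams can be routed to separate destinations:
emit > out.log 2> err.log
```

After this runs, out.log holds only the program output and err.log only the warning; nothing is intermixed.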

Abigail

@p5pRT
Author

p5pRT commented Feb 3, 2014

From perl-diddler@tlinx.org

On Mon Feb 03 01:43:56 2014, abigail@abigail.be wrote:

On Sun, Feb 02, 2014 at 01:17:49PM -0800, Linda Walsh via RT wrote:

The warnings in browsers go to a browser-specific, separate console.
They are not intermixed with program output.

That's the crux of the problem w/warnings in perl. If you don't
know about the warning in *advance*, the only way to prevent the
warning from intermixing with program output is to turn all warnings
off.

Program output goes to STDOUT. Warnings go to STDERR.

Those were different channels long before Tim Berners-Lee invented the web.


  Eh? Different channels? As we've noted in this discussion,
browsers' runtimes redirect errors so they don't intermix on
the user's display.

  And perl's runtime (because perl is a compiler, linker and a runtime),
does not redirect STDERR. By default, STDOUT and STDERR have gone
to the user's current terminal.
 
  The perl runtime lacks the browser's feature of directing
STDOUT and STDERR to separate locations. Note, earlier in the conversation,
I did mention the possibility that perl could redirect errors or warnings
to a log file. The answer was 'no': that's up to each program to
work out for itself. With the browser, the default is to separate those
channels. With perl's runtime, it is usually the case that they are
intermixed.
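A program that wants browser-console-style separation can arrange it itself, since perl allows reopening STDERR. A minimal sketch (the log filename is an assumption, not anything from this thread):

```perl
use strict;
use warnings;

# Reopen STDERR so warnings land in a side log, leaving STDOUT
# clean for program output. 'warnings.log' is a hypothetical name.
open(STDERR, '>>', 'warnings.log')
    or die "can't redirect STDERR: $!";

print "normal output\n";          # goes to STDOUT
warn "a diagnostic message\n";    # now goes to warnings.log
```

This is per-program, of course, which is exactly the point being argued: the separation is available, but nothing does it by default.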

@p5pRT
Author

p5pRT commented Feb 4, 2014

From @iabyn

On Fri, Jan 31, 2014 at 01:45:07PM -0800, Linda Walsh via RT wrote:

The open call says:

    The filename passed to the one- and two-argument forms of open()
    will have leading and trailing whitespace deleted and normal
    redirection characters honored.

Yet ... lets see...
-------------------

perl -we'use strict; open(my $fh, "< mytest
") or die "error: $!";'
Unsuccessful open on filename containing newline at -e line 1.
error: No such file or directory at -e line 1.

-----

What do ya know... This IS a bug.

Agreed, I don't think it should warn in this case.

FWIW, it doesn't give the warning in the case of "+<" nor ">".

That's because it only warns on read-only file access.

Conversely, in the 3 arg format where whitespace is not stripped:

perl -we'use strict; open(my $fh, "<", "mytest  

") or die "error: $!";'
error: No such file or directory at -e line 1.

There is no warning.

Which I think is a bug.

(and the white space really isn't stripped
-- not the \n nor the 2 spaces after "mytest").

Which I hope we both agree is the correct behaviour.
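The two behaviours being contrasted can be put side by side; a minimal sketch (filenames are illustrative):

```perl
use strict;
use warnings;

# 2-arg open: leading/trailing whitespace in the spec is stripped,
# so this creates a file named simply "mytest".
open(my $out, "> mytest \n") or die "2-arg open failed: $!";
close $out;

# 3-arg open: the filename is taken literally, trailing space,
# newline and all, so this refers to a different (missing) file.
open(my $in, "<", "mytest \n") or warn "3-arg literal name: $!";
```

The 2-arg call succeeds against plain "mytest"; the 3-arg call looks for the literal whitespace-laden name and normally fails.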

--
"But Sidley Park is already a picture, and a most amiable picture too.
The slopes are green and gentle. The trees are companionably grouped at
intervals that show them to advantage. The rill is a serpentine ribbon
unwound from the lake peaceably contained by meadows on which the right
amount of sheep are tastefully arranged." -- Lady Croom, "Arcadia"

@p5pRT
Author

p5pRT commented Feb 4, 2014

From perl-diddler@tlinx.org

On Tue Feb 04 04:29:51 2014, davem wrote:

On Fri, Jan 31, 2014 at 01:45:07PM -0800, Linda Walsh via RT wrote:

The open call says:

    The filename passed to the one- and two-argument forms of open()
    will have leading and trailing whitespace deleted and normal
    redirection characters honored.

Yet ... lets see...
-------------------

perl -we'use strict; open(my $fh, "< mytest
") or die "error: $!";'
Unsuccessful open on filename containing newline at -e line 1.
error: No such file or directory at -e line 1.

-----

What do ya know... This IS a bug.

Agreed, I don't think it should warn in this case.

FWIW, it doesn't give the warning in the case of "+<" nor ">".

That's because it only warns on read-only file access.


Right -- just noting, RO & file not existing.

Conversely, in the 3 arg format where whitespace is not stripped:

perl -we'use strict; open(my $fh, "<", "mytest  

") or die "error: $!";'
error: No such file or directory at -e line 1.

There is no warning.

Which I think is a bug.


  Given that the 2 arg form's docs say to use the 3 arg form to avoid
white space stripping, it seems a bit malicious to then throw out a
warning for a case that was recommended to be used to get literal file
name usage.

  I would also point out that inserting such a warning at this point
in time -- given the code involving this goes back to 5.0 -- there is
a good chance of causing further incompatibilities. I.e. if I was
being cautious, I wouldn't.

(and the white space really isn't stripped
-- not the \n nor the 2 spaces after "mytest").

Which I hope we both agree is the correct behaviour.


Yup

Which leaves the three 2-3 line comments I added above documenting
behavior (vs. including documentation on diagnostics).

However, given that it is only tested in 1 place and only when the file
doesn't exist, it seems like a very special case for something that
would likely be very rare. Since perl has quite a few special cases,
and the increasing plethora of such over time becomes an increasingly
large anchor, it becomes increasingly difficult to streamline perl
and focus on language improvements vs. maintaining rarely-used checks.
It sounds like someone's "pet" check to solve a specific problem they
had rather than addressing a common case. In 40 years of programming,
I've never encountered that problem.

Wouldn't it be a sign of wisdom to take this opportunity to remove this
bit of fluff -- no matter how small?
