Test failures only on an AMD A10 Kaveri #15996

Closed
p5pRT opened this issue Jun 3, 2017 · 29 comments
@p5pRT

p5pRT commented Jun 3, 2017

Migrated from rt.perl.org#131489 (status was 'rejected')

Searchable as RT131489$

@p5pRT

p5pRT commented Jun 3, 2017

From zarniwhoop@ntlworld.com

Created by zarniwhoop@ntlworld.com

I'm using linuxfromscratch, and normally I only test perl when I
am in chroot building a new system. Because we use system zlib
(and bzip2) we now get 9 test failures (both in 5.24.1 and 5.26.0)
on all builds [ cpan/Compress-Raw-Zlib and cpan/IO-Compress ] and
that is fine and understood.

However, on this one machine (AMD A10 Kaveri) I get three extra
failures (since 5.24.1, which was the first version I built on it,
and I've built that several times, all with similar failures).

  ../dist/Time-HiRes/t/itimer.t
  ../lib/Benchmark.t
  op/time.t

dist/Time-HiRes/t/itimer .......................................
t/itimer.t​: overall time allowed for tests (360s) exceeded!
FAILED--expected 2 tests, saw 1

lib/Benchmark .................................................. Illegal division by zero at ../lib/Benchmark.t line 77.
# Looks like your test exited with 255 just after 10.
FAILED--expected 213 tests, saw 10

t/op/time ...................................................... # Failed test 2 - very basic times test at op/time.t line 33
FAILED at test 2

Retesting in a completed system shows the same failures are present.
Looking back at all my logs from this system, lib/Benchmark.t
sometimes runs 10 tests and other times it runs 14, but always
reporting division by zero, either at line 77 or at line 93.

The other 2 tests look as if it is running too slowly. Current
kernels and recent gcc do AMD processors no favours, but much older
and slower AMD CPUs get normal results in the perl tests.

At the moment, idle cores claim to be running at 1400 MHz; the core
running the tests will typically be somewhere between 1400 MHz (e.g.
just after switching to a different core which had been idle) and 3600 MHz.

Any advice on trying to diagnose these problems (maybe a poor kernel
config) would be appreciated.

This isn't the world's fastest machine, and on another AMD last year
I had the perl tests hanging indefinitely (overnight) which in the
end was caused by a bad choice when configuring my linux kernel
(CONFIG_LEGACY_VSYSCALL_EMULATE instead of _NONE). So, when I saw
that the chroot tests were taking a long time, I went to look at the
test log with tail -f.

My command to run the tests is:
  make -k test >somelog.check 2>&1 || true

Unfortunately, although the log showed all the commands for building
the test programs, none of the test results were present. At the
time, ps aux showed it WAS running Benchmark.t. Fortunately, the
tests eventually finished and at that time all the details were in
the log.

Retesting now, on a finished system, tail -f on a log works ok so the
problem seems to be some sort of deficiency in the chroot system (I'm
sure tail -f used to work on the perl test log, I've done it several
times to check progress). Again, any thoughts on what (probably an
absence of something) might cause this?
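One common reason nothing shows up under tail -f until the very end is that output written through a redirect (rather than to a tty) is block-buffered. A sketch of a variant of the command above that streams results to the log as they are produced, assuming GNU coreutils' stdbuf and tee are available:

```shell
# Sketch: force line buffering through the pipeline and mirror output to
# both the log and the terminal as it arrives. The log name is arbitrary.
stdbuf -oL make -k test 2>&1 | tee somelog.check || true
```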

The data below is from the completed system. On this build I'm
running gcc-7.1 but on a very similar build on an intel with 5.24.1
the test results were normal, and testing 5.26.0 on the completed
system on the intel also produces the normal results. Those are my
only 7.1 builds, the rest were gcc-6.3.

Thanks for reading.

ĸen

Perl Info

Flags:
    category=core
    severity=low

Site configuration information for perl 5.26.0:

Configured by root at Thu Jun  1 01:17:40 UTC 2017.

Summary of my perl5 (revision 5 version 26 subversion 0) configuration:
   
  Platform:
    osname=linux
    osvers=4.11.0
    archname=x86_64-linux-thread-multi
    uname='linux deluxe 4.11.0 #8 smp preempt wed may 31 22:42:01 bst 2017 x86_64 gnulinux '
    config_args='-des -Dprefix=/usr -Dvendorprefix=/usr -Dman1dir=/usr/share/man/man1 -Dman3dir=/usr/share/man/man3 -Dpager=/usr/bin/less -isR -Duseshrplib -Dusethreads'
    hint=recommended
    useposix=true
    d_sigaction=define
    useithreads=define
    usemultiplicity=define
    use64bitint=define
    use64bitall=define
    uselongdouble=undef
    usemymalloc=n
    default_inc_excludes_dot=define
    bincompat5005=undef
  Compiler:
    cc='cc'
    ccflags ='-D_REENTRANT -D_GNU_SOURCE -fwrapv -fno-strict-aliasing -pipe -fstack-protector-strong -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64'
    optimize='-O2'
    cppflags='-D_REENTRANT -D_GNU_SOURCE -fwrapv -fno-strict-aliasing -pipe -fstack-protector-strong -I/usr/local/include'
    ccversion=''
    gccversion='7.1.0'
    gccosandvers=''
    intsize=4
    longsize=8
    ptrsize=8
    doublesize=8
    byteorder=12345678
    doublekind=3
    d_longlong=define
    longlongsize=8
    d_longdbl=define
    longdblsize=16
    longdblkind=3
    ivtype='long'
    ivsize=8
    nvtype='double'
    nvsize=8
    Off_t='off_t'
    lseeksize=8
    alignbytes=8
    prototype=define
  Linker and Libraries:
    ld='cc'
    ldflags =' -fstack-protector-strong -L/usr/local/lib'
    libpth=/usr/local/lib /usr/lib/gcc/x86_64-pc-linux-gnu/7.1.0/include-fixed /usr/lib /lib/../lib /usr/lib/../lib /lib
    libs=-lpthread -lnsl -lgdbm -ldl -lm -lcrypt -lutil -lc -lgdbm_compat
    perllibs=-lpthread -lnsl -ldl -lm -lcrypt -lutil -lc
    libc=libc-2.25.so
    so=so
    useshrplib=true
    libperl=libperl.so
    gnulibc_version='2.25'
  Dynamic Linking:
    dlsrc=dl_dlopen.xs
    dlext=so
    d_dlsymun=undef
    ccdlflags='-Wl,-E -Wl,-rpath,/usr/lib/perl5/5.26.0/x86_64-linux-thread-multi/CORE'
    cccdlflags='-fPIC'
    lddlflags='-shared -O2 -L/usr/local/lib -fstack-protector-strong'



@INC for perl 5.26.0:
    /usr/lib/perl5/site_perl/5.26.0/x86_64-linux-thread-multi
    /usr/lib/perl5/site_perl/5.26.0
    /usr/lib/perl5/vendor_perl/5.26.0/x86_64-linux-thread-multi
    /usr/lib/perl5/vendor_perl/5.26.0
    /usr/lib/perl5/5.26.0/x86_64-linux-thread-multi
    /usr/lib/perl5/5.26.0


Environment for perl 5.26.0:
    HOME=/home/ken
    LANG=en_GB.UTF-8
    LANGUAGE (unset)
    LC_ALL=en_GB.UTF-8
    LD_LIBRARY_PATH (unset)
    LOGDIR (unset)
    PATH=/bin:/usr/bin:/home/ken/bin
    PERL_BADLANG (unset)
    SHELL=/bin/bash
-- 
I live in a city. I know sparrows from starlings.  After that
everything is a duck as far as I'm concerned.  -- Monstrous Regiment

@p5pRT

p5pRT commented Jun 3, 2017

From @jkeenan

On Sat, 03 Jun 2017 00:17:37 GMT, zarniwhoop@ntlworld.com wrote:

(copying the file from the machine on which I prepared it)

This is a bug report for perl from zarniwhoop@​ntlworld.com,
generated with the help of perlbug 1.40 running under perl 5.26.0.

Ken, you raise a variety of issues here. I can foresee that there's enough here for multiple RT tickets. So I'll try to comment only where I think I understand the issues you're raising.

I'm using linuxfromscratch,

This is probably the first report we've had from someone using this distribution (http://www.linuxfromscratch.org/).

and normally I only test perl when I
am in chroot building a new system. Because we use system zlib
(and bzip2) we now get 9 test failures (both in 5.24.1 and 5.26.0)
on all builds [ cpan/Compress-Raw-Zlib and cpan/IO-Compress ] and
that is fine and understood.

These two libraries are maintained upstream on CPAN. Have you reported these test failures in their own bug trackers?

https://rt.cpan.org/Dist/Display.html?Name=Compress-Raw-Zlib

https://rt.cpan.org/Dist/Display.html?Name=IO-Compress

The maintainer of these distributions is quite responsive, so I would urge you to report these failures as per their documentation:

http://search.cpan.org/~pmqs/Compress-Raw-Zlib-2.074/

http://search.cpan.org/~pmqs/IO-Compress-2.074/

However, on this one machine (AMD A10 Kaveri) I get three extra
failures (since 5.24.1, which was the first version I built on it,
and I've built that several times, all with similar failures).

../dist/Time-HiRes/t/itimer.t
../lib/Benchmark.t
op/time.t

dist/Time-HiRes/t/itimer .......................................
t/itimer.t​: overall time allowed for tests (360s) exceeded!
FAILED--expected 2 tests, saw 1

lib/Benchmark ..................................................
Illegal division by zero at ../lib/Benchmark.t line 77.
# Looks like your test exited with 255 just after 10.
FAILED--expected 213 tests, saw 10

t/op/time ...................................................... #
Failed test 2 - very basic times test at op/time.t line 33
FAILED at test 2

Would you be able to provide email attachments for the output of each of the following commands?

#####
cd t; ./perl harness -v ../dist/Time-HiRes/t/itimer.t; cd -

cd t; ./perl harness -v ../lib/Benchmark.t; cd -

cd t; ./perl harness -v op/time.t; cd -
#####

Retesting in a completed system shows the same failures are present.
Looking back at all my logs from this system, lib/Benchmark.t
sometimes runs 10 tests and other times it runs 14, but always
reporting division by zero, either at line 77 or at line 93.

The other 2 tests look as if it is running too slowly. Current
kernels and recent gcc do AMD processors no favours, but much older
and slower AMD CPUs get normal results in the perl tests.

At the moment, idle cores claim to be running at 1400 MHz; the core
running the tests will typically be somewhere between 1400 MHz (e.g.
just after switching to a different core which had been idle) and 3600 MHz.

Any advice on trying to diagnose these problems (maybe a poor kernel
config) would be appreciated.

This isn't the world's fastest machine, and on another AMD last year
I had the perl tests hanging indefinitely (overnight) which in the
end was caused by a bad choice when configuring my linux kernel
(CONFIG_LEGACY_VSYSCALL_EMULATE instead of _NONE). So, when I saw
that the chroot tests were taking a long time, I went to look at the
test log with tail -f.

My command to run the tests is:
make -k test >somelog.check 2>&1 || true

Unfortunately, although the log showed all the commands for building
the test programs, none of the test results were present. At the
time, ps aux showed it WAS running Benchmark.t. Fortunately, the
tests eventually finished and at that time all the details were in
the log.

Retesting now, on a finished system, tail -f on a log works ok so the
problem seems to be some sort of deficiency in the chroot system (I'm
sure tail -f used to work on the perl test log, I've done it several
times to check progress). Again, any thoughts on what (probably an
absence of something) might cause this ?

The data below is from the completed system. On this build I'm
running gcc-7.1 but on a very similar build on an intel with 5.24.1
the test results were normal, and testing 5.26.0 on the completed
system on the intel also produces the normal results. Those are my
only 7.1 builds, the rest were gcc-6.3.

It appears that we do not yet have any smoke test reports for gcc-7.1 or -6.3 on any platform (http://perl5.test-smoke.org/search).

--
James E Keenan (jkeenan@​cpan.org)

@p5pRT

p5pRT commented Jun 3, 2017

The RT System itself - Status changed from 'new' to 'open'

@p5pRT

p5pRT commented Jun 3, 2017

From @demerphq

On 3 June 2017 at 02:17, via RT <perlbug-followup@perl.org> wrote:

# New Ticket Created by
# Please include the string: [perl #131489]
# in the subject line of all future correspondence about this issue.
# <URL: https://rt-archive.perl.org/perl5/Ticket/Display.html?id=131489 >

(copying the file from the machine on which I prepared it)

This is a bug report for perl from zarniwhoop@​ntlworld.com,
generated with the help of perlbug 1.40 running under perl 5.26.0.

-----------------------------------------------------------------
[Please describe your issue here]

I'm using linuxfromscratch, and normally I only test perl when I
am in chroot building a new system. Because we use system zlib
(and bzip2) we now get 9 test failures (both in 5.24.1 and 5.26.0)
on all builds [ cpan/Compress-Raw-Zlib and cpan/IO-Compress ] and
that is fine and understood.

However, on this one machine (AMD A10 Kaveri) I get three extra
failures (since 5.24.1, which was the first version I built on it,
and I've built that several times, all with similar failures).

    ../dist/Time-HiRes/t/itimer.t
    ../lib/Benchmark.t
    op/time.t

dist/Time-HiRes/t/itimer .......................................
t/itimer.t​: overall time allowed for tests (360s) exceeded!
FAILED--expected 2 tests, saw 1

lib/Benchmark .................................................. Illegal division by zero at ../lib/Benchmark.t line 77.
# Looks like your test exited with 255 just after 10.
FAILED--expected 213 tests, saw 10

t/op/time ...................................................... # Failed test 2 - very basic times test at op/time.t line 33
FAILED at test 2

Retesting in a completed system shows the same failures are present.
Looking back at all my logs from this system, lib/Benchmark.t
sometimes runs 10 tests and other times it runs 14, but always
reporting division by zero, either at line 77 or at line 93.

The other 2 tests look as if it is running too slowly. Current
kernels and recent gcc do AMD processors no favours, but much older
and slower AMD CPUs get normal results in the perl tests.

At the moment, idle cores claim to be running at 1400 MHz; the core
running the tests will typically be somewhere between 1400 MHz (e.g.
just after switching to a different core which had been idle) and 3600 MHz.

Any advice on trying to diagnose these problems (maybe a poor kernel
config) would be appreciated.

This isn't the world's fastest machine, and on another AMD last year
I had the perl tests hanging indefinitely (overnight) which in the
end was caused by a bad choice when configuring my linux kernel
(CONFIG_LEGACY_VSYSCALL_EMULATE instead of _NONE). So, when I saw
that the chroot tests were taking a long time, I went to look at the
test log with tail -f.

I can't speak to chroot, but the tests that are failing time things,
and apparently timing things isn't working reliably on your box.

I know that Benchmark tests can produce bizarro output if timing the
empty loop takes longer than it takes to execute the code being
benchmarked. (Which shouldn't happen, but *can* happen under certain
extreme use cases.)
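A minimal sketch of that failure mode (hypothetical code, not Benchmark.pm's actual internals): any iterations-per-CPU-second calculation needs a guard for the case where the clock reports zero elapsed CPU time, which is consistent with the "Illegal division by zero" seen in Benchmark.t:

```perl
use strict;
use warnings;

# Hypothetical helper: iterations per CPU second. On a machine whose
# times()/clock readings come back as zero, the naive division dies
# with "Illegal division by zero", so guard for it explicitly.
sub rate_per_cpu_sec {
    my ($iters, $cpu_secs) = @_;
    return undef if $cpu_secs <= 0;   # clock under-reported; rate unmeasurable
    return $iters / $cpu_secs;
}

printf "%.1f iters/s\n", rate_per_cpu_sec(6103, 3.1);
print defined rate_per_cpu_sec(6103, 0) ? "rate\n" : "unmeasurable\n";
```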

Yves


--
perl -Mre=debug -e "/just|another|perl|hacker/"

@p5pRT

p5pRT commented Jun 4, 2017

From zarniwhoop@ntlworld.com

On Fri, Jun 02, 2017 at 06:09:50PM -0700, James E Keenan via RT wrote:

On Sat, 03 Jun 2017 00:17:37 GMT, zarniwhoop@ntlworld.com wrote:

Ken, you raise a variety of issues here. I can foresee that there's enough here for multiple RT tickets. So I'll try to comment only where I think I understand the issues you're raising.

I'm using linuxfromscratch,

This is probably the first report we've had from someone using this distribution (http://www.linuxfromscratch.org/).

Yeah, perl's tests mostly work by the time we get to a released
version. And we try to avoid unreleased or development versions for
most of what we do. From time to time some of us get occasional
errors, but they tend to be occasional, or machine-specific like
my current problem.

Like the joke about the boy who didn't speak for years, I guess
everything has been fine until now, or maybe I've just been too lazy
to comment on the errors I get. The trouble is, on some test builds
things go badly wrong and it's clear that I have a local problem.
But nowadays I mostly look at the later Beyond Linuxfromscratch stuff.

and normally I only test perl when I
am in chroot building a new system. Because we use system zlib
(and bzip2) we now get 9 test failures (both in 5.24.1 and 5.26.0)
on all builds [ cpan/Compress-Raw-Zlib and cpan/IO-Compress ] and
that is fine and understood.

These two libraries are maintained upstream on CPAN. Have you reported these test failures in their own bug trackers?

https://rt.cpan.org/Dist/Display.html?Name=Compress-Raw-Zlib

https://rt.cpan.org/Dist/Display.html?Name=IO-Compress

The maintainer of these distributions is quite responsive, so I would urge you to report these failures as per their documentation:

I think we looked at the tests, saw a version was hardcoded, and
decided to just tell our users that those batches of the tests will
fail. I guess I can create more work for the maintainer (even just
closing as 'invalid' or 'not our problem' is work).

http://search.cpan.org/~pmqs/Compress-Raw-Zlib-2.074/

http://search.cpan.org/~pmqs/IO-Compress-2.074/

However, on this one machine (AMD A10 Kaveri) I get three extra
failures (since 5.24.1, which was the first version I built on it,
and I've built that several times, all with similar failures).

../dist/Time-HiRes/t/itimer.t
../lib/Benchmark.t
op/time.t

dist/Time-HiRes/t/itimer .......................................
t/itimer.t​: overall time allowed for tests (360s) exceeded!
FAILED--expected 2 tests, saw 1

lib/Benchmark ..................................................
Illegal division by zero at ../lib/Benchmark.t line 77.
# Looks like your test exited with 255 just after 10.
FAILED--expected 213 tests, saw 10

t/op/time ...................................................... #
Failed test 2 - very basic times test at op/time.t line 33
FAILED at test 2

Would you be able to provide email attachments for the output of each of the following commands?

#####
cd t; ./perl harness -v ../dist/Time-HiRes/t/itimer.t; cd -

cd t; ./perl harness -v ../lib/Benchmark.t; cd -

cd t; ./perl harness -v op/time.t; cd -
#####

I seem to have problems with invoking subshells, the attachments are
only for the ./perl harness ... part. These are from a fresh build
on the completed system (not in /tmp this time :) and I let the
tests begin to run so that ./perl was in place.
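If the `cd -` form is what trips up the subshells, each command can instead run inside its own parenthesised subshell (a generic shell technique, nothing perl-specific), so the working directory is restored automatically and each test's output lands in a file that can be attached (the output file names here are arbitrary):

```shell
# Sketch: parentheses start a subshell, so the `cd` does not leak out.
( cd t && ./perl harness -v ../dist/Time-HiRes/t/itimer.t ) > itimer.out 2>&1
( cd t && ./perl harness -v ../lib/Benchmark.t )            > benchmark.out 2>&1
( cd t && ./perl harness -v op/time.t )                     > time.out 2>&1
```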

It appears that we do not yet have any smoke test reports for gcc-7.1 or -6.3 on any platform (http://perl5.test-smoke.org/search).

But 6.3 is so last year ;-)

BTW - thanks for your patch in #131337 - when we build our temporary
'/tools' system to give us what we need in chroot we tend to do
default builds. With 5.26.0 and gcc-7.1 we hit the same problem.
In our build of the final system we build shared libs in perl which
skips over that issue.

ĸen
--
I live in a city. I know sparrows from starlings. After that
everything is a duck as far as I'm concerned. -- Monstrous Regiment

@p5pRT

p5pRT commented Jun 4, 2017

From zarniwhoop@ntlworld.com

t/itimer.t​: overall time allowed for tests (360s) exceeded!
../dist/Time-HiRes/t/itimer.t ..
1..2
# I am the main process 17771, starting the watchdog process...
# The watchdog process 17772 launched, continuing testing...
# I am the watchdog process 17772, sleeping for 360 seconds...
# setitimer​: 0 0
ok 1 - ITIMER_VIRTUAL defined with sufficient granularity
# getitimer​: 0.501 0.4
Terminating main process 17771...
# This is the watchdog process 17772, over and out.
Failed 1/2 subtests

Test Summary Report


../dist/Time-HiRes/t/itimer.t (Wstat​: 9 Tests​: 1 Failed​: 0)
  Non-zero wait status​: 9
  Parse errors​: Bad plan. You planned 2 tests but ran 1.
Files=1, Tests=1, 361 wallclock secs ( 0.00 usr 0.01 sys + 0.01 cusr 359.97 csys = 359.99 CPU)
Result​: FAIL

@p5pRT

p5pRT commented Jun 4, 2017

From zarniwhoop@ntlworld.com

Illegal division by zero at ../lib/Benchmark.t line 77.
# Looks like your test exited with 255 just after 10.
../lib/Benchmark.t ..
1..213
ok 1 - 'Ensure we can create a benchmark object' isa 'Benchmark'
ok 2 - 'timeit CODEREF' isa 'Benchmark'
ok 3 - benchmarked code was run 5 times
ok 4 - 'timeit eval' isa 'Benchmark'
ok 5 - benchmarked code was run 5 times
ok 6 - benchmarked code called without arguments
# Burning CPU to benchmark things; will take time...
ok 7 - 'countit 0, CODEREF' isa 'Benchmark'
ok 8 - benchmarked code was run
# in_threesecs=6103 iterations
ok 9 - iters returned positive iterations
ok 10 - 3s cpu3 is at least 3s
Dubious, test returned 255 (wstat 65280, 0xff00)
Failed 203/213 subtests

Test Summary Report


../lib/Benchmark.t (Wstat​: 65280 Tests​: 10 Failed​: 0)
  Non-zero exit status​: 255
  Parse errors​: Bad plan. You planned 213 tests but ran 10.
Files=1, Tests=10, 6086 wallclock secs ( 0.00 usr 0.01 sys + 0.28 cusr 6084.22 csys = 6084.51 CPU)
Result​: FAIL

@p5pRT

p5pRT commented Jun 4, 2017

From zarniwhoop@ntlworld.com

# Failed test 2 - very basic times test at op/time.t line 33
op/time.t ..
1..72
ok 1 - very basic time test
not ok 2 - very basic times test
ok 3 - localtime() list context
ok 4 - month day
ok 5 - year
ok 6 - localtime(), scalar context
ok 7 - changes to $ENV{TZ} respected
ok 8 - gmtime() list conext
ok 9 - month day
ok 10 - year
ok 11 - gmtime() and localtime() agree what day of year
ok 12 - gmtime(), scalar context
ok 13 - gmtime(1) list context
ok 14 - scalar
ok 15 - gmtime(1073741824) list context
ok 16 - scalar
ok 17 - gmtime(-2147483648) list context
ok 18 - scalar
ok 19 - gmtime(-34359738368) list context
ok 20 - scalar
ok 21 - gmtime(-1) list context
ok 22 - scalar
ok 23 - gmtime(549755813888) list context
ok 24 - scalar
ok 25 - gmtime(2147483648) list context
ok 26 - scalar
ok 27 - gmtime(0) list context
ok 28 - scalar
ok 29 - gmtime(4294967296) list context
ok 30 - scalar
ok 31 - gmtime(-4294967296) list context
ok 32 - scalar
ok 33 - localtime(1296000) list context
ok 34 - scalar
ok 35 - localtime(-8589934592) list context
ok 36 - scalar
ok 37 - localtime(5000000000) list context
ok 38 - scalar
ok 39 - localtime(1163500000) list context
ok 40 - scalar
ok 41 - localtime(-1296000) list context
ok 42 - scalar
ok 43
ok 44 - Ignore fractional time
ok 45
ok 46 - Ignore fractional time
ok 47 - year check, gmtime(-281474976710656)
ok 48 - year check, localtime(-281474976710656)
ok 49 - year check, gmtime(-4.5035996273705e+15)
ok 50 - year check, localtime(-4.5035996273705e+15)
ok 51 - year check, gmtime(-70368744177664)
ok 52 - year check, localtime(-70368744177664)
ok 53 - year check, gmtime(281474976710656)
ok 54 - year check, localtime(281474976710656)
ok 55 - year check, gmtime(4.5035996273705e+15)
ok 56 - year check, localtime(4.5035996273705e+15)
ok 57 - year check, gmtime(70368744177664)
ok 58 - year check, localtime(70368744177664)
ok 59
ok 60
ok 61
ok 62
ok 63
ok 64
ok 65
ok 66
ok 67
ok 68
ok 69
ok 70
ok 71 - [perl \#123495] gmtime(NaN)
ok 72 - localtime(NaN)
Failed 1/72 subtests

Test Summary Report


op/time.t (Wstat​: 0 Tests​: 72 Failed​: 1)
  Failed test​: 2
Files=1, Tests=72, 21 wallclock secs ( 0.00 usr 0.02 sys + 0.00 cusr 20.00 csys = 20.02 CPU)
Result​: FAIL

@p5pRT

p5pRT commented Jun 4, 2017

From @jkeenan

zarniwhoop@​ntlworld.com wrote​:

However, on this one machine (AMD A10 Kaveri) I get three extra
failures (since 5.24.1, which was the first version I built on it,
and I've built that several times, all with similar failures).

../dist/Time-HiRes/t/itimer.t
../lib/Benchmark.t
op/time.t

dist/Time-HiRes/t/itimer .......................................
t/itimer.t​: overall time allowed for tests (360s) exceeded!
FAILED--expected 2 tests, saw 1

lib/Benchmark ..................................................
Illegal division by zero at ../lib/Benchmark.t line 77.
# Looks like your test exited with 255 just after 10.
FAILED--expected 213 tests, saw 10

t/op/time ...................................................... #
Failed test 2 - very basic times test at op/time.t line 33
FAILED at test 2

I am attaching two files which are essentially stripped-down versions of 2 of the 3 failing test files with some debugging statements thrown in.

Can you place 'xtime.t' into 't/op' and then run​:

#####
./perl -Ilib ./t/op/xtime.t
#####

My output for that would be​:

#####
$ ./perl -Ilib ./t/op/xtime.t
1..2
ok 1 - very basic time test
BINGO
beguser​: 0
nowuser​: 0.01
begsys​: 0
nowsys​: 0
i​: 2000001
ok 2 - very basic times test
#####

And can you then place 'xBenchmark.t' into 'lib/Benchmark' and then run​:

#####
./perl -Ilib ./lib/xBenchmark.t
#####

My output for that would be​:

#####
$ ./perl -Ilib ./lib/xBenchmark.t
1..10
ok 1 - 'Ensure we can create a benchmark object' isa 'Benchmark'
ok 2 - 'timeit CODEREF' isa 'Benchmark'
ok 3 - benchmarked code was run 5 times
ok 4 - 'timeit eval' isa 'Benchmark'
ok 5 - benchmarked code was run 5 times
ok 6 - benchmarked code called without arguments
# Burning CPU to benchmark things; will take time...
$threesecs->[0]​: real 3
$threesecs->[1]​: user 3.22
$threesecs->[2]​: system 0
$threesecs->[3]​: children_user 0
$threesecs->[4]​: children_system 0
$threesecs->[5]​: iters 8820
ok 7 - 'countit 0, CODEREF' isa 'Benchmark'
ok 8 - benchmarked code was run
# in_threesecs=8820 iterations
ok 9 - iters returned positive iterations
ok 10 - 3s cpu3 is at least 3s
# in_threesecs_adj=8217.39130434783 adjusted iterations
#####
Attach output for each.

Thank you very much.

--
James E Keenan (jkeenan@​cpan.org)

@p5pRT

p5pRT commented Jun 4, 2017

From @jkeenan

#!./perl -w

BEGIN {
  chdir 't' if -d 't';
  @INC = ('../lib');
}

use warnings;
use strict;
use vars qw($foo $bar $baz $ballast);
use Test::More tests => 10;

use Benchmark qw(:all);

my $DELTA = 0.4;

# Some timing ballast
sub fib {
  my $n = shift;
  return $n if $n < 2;
  fib($n-1) + fib($n-2);
}
$ballast = 15;

my $All_Pattern =
  qr/(\d+) +wallclock secs? +\( *(-?\d+\.\d\d) +usr +(-?\d+\.\d\d) +sys +\+ +(-?\d+\.\d\d) +cusr +(-?\d+\.\d\d) +csys += +(-?\d+\.\d\d) +CPU\)/;
my $Noc_Pattern =
  qr/(\d+) +wallclock secs? +\( *(-?\d+\.\d\d) +usr +\+ +(-?\d+\.\d\d) +sys += +(-?\d+\.\d\d) +CPU\)/;
my $Nop_Pattern =
  qr/(\d+) +wallclock secs? +\( *(-?\d+\.\d\d) +cusr +\+ +(-?\d+\.\d\d) +csys += +\d+\.\d\d +CPU\)/;
# Please don't trust the matching parentheses to be useful in this :-)
my $Default_Pattern = qr/$All_Pattern|$Noc_Pattern/;

# see if the ratio of two integer values is within (1+$delta)

sub cmp_delta {
  my ($min, $max, $delta) = @_;
  ($min, $max) = ($max, $min) if $max < $min;
  return 0 if $min < 1; # avoid / 0
  return $max/$min <= (1+$delta);
}

my $t0 = new Benchmark;
isa_ok ($t0, 'Benchmark', "Ensure we can create a benchmark object");

# We use the benchmark object once we've done some work​:

isa_ok(timeit(5, sub {++$foo}), 'Benchmark', "timeit CODEREF");
is ($foo, 5, "benchmarked code was run 5 times");

isa_ok(timeit(5, '++$bar'), 'Benchmark', "timeit eval");
is ($bar, 5, "benchmarked code was run 5 times");

# is coderef called with spurious arguments?
timeit( 1, sub { $foo = @_ });
is ($foo, 0, "benchmarked code called without arguments");

print "# Burning CPU to benchmark things; will take time...\n";

# We need to do something fairly slow in the coderef.
# Same coderef. Same place in memory.
my $coderef = sub {$baz += fib($ballast)};

# The default is three.
$baz = 0;
my $threesecs = countit(0, $coderef);
my @​elems = ( 'real', 'user', 'system', 'children_user', 'children_system', 'iters' );
for (0..$#$threesecs) { printf "%-17s%-17s%5s\n" => "\$threesecs->[$_]:", $elems[$_], $threesecs->[$_]; }
die "Why is 'user' being reported as 0?" unless $threesecs->[1];
isa_ok($threesecs, 'Benchmark', "countit 0, CODEREF");
isnt ($baz, 0, "benchmarked code was run");
my $in_threesecs = $threesecs->iters;
print "# in_threesecs=$in_threesecs iterations\n";
cmp_ok($in_threesecs, '>', 0, "iters returned positive iterations");
my $cpu3 = $threesecs->[1]; # user
my $sys3 = $threesecs->[2]; # sys
cmp_ok($cpu3+$sys3, '>=', 3.0, "3s cpu3 is at least 3s");
my $in_threesecs_adj = $in_threesecs;
$in_threesecs_adj *= (3/$cpu3); # adjust because may not have run for exactly 3s
print "# in_threesecs_adj=$in_threesecs_adj adjusted iterations\n";
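The adjustment on the last line is plain rescaling: iterations times (3 seconds / measured user CPU). Plugging in figures from a typical passing run (8820 iterations in 3.22s of user time; sample numbers, not live measurements) shows the arithmetic:

```shell
# adjusted = iters * (3 / cpu3), as in $in_threesecs_adj above.
# 8820 and 3.22 are sample figures, not measured here.
awk 'BEGIN { iters = 8820; cpu3 = 3.22; printf "%.2f\n", iters * (3 / cpu3) }'
```

This prints 8217.39, the same figure (to two decimals) that turns up as in_threesecs_adj in the debug output later in this thread.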


p5pRT commented Jun 4, 2017

From @jkeenan

#!./perl -w

BEGIN {
  chdir 't' if -d 't';
  require './test.pl';
  set_up_inc('../lib');
}

plan tests => 2;

# These tests make sure, among other things, that we don't end up
# burning tons of CPU for dates far in the future.
# watchdog() makes sure that the test script eventually exits if
# the tests are triggering the failing behavior
watchdog(250);

($beguser,$begsys) = times;

$beg = time;

while (($now = time) == $beg) { sleep 1 }

ok($now > $beg && $now - $beg < 10, 'very basic time test');
my $x = "aaaa";
for ($i = 0; $i < 1_000_000; $i++) {
  for my $j (1..1000) { ++$x; }; # burn some user cycles
  ($nowuser, $nowsys) = times;
  #$i = 2_000_000 if $nowuser > $beguser && ( $nowsys >= $begsys ||
  # (!$nowsys && !$begsys));
  if ($nowuser > $beguser && ( $nowsys >= $begsys || (!$nowsys && !$begsys))) {
  print "BINGO\n";
  $i = 2_000_000;
  }
  last if time - $beg > 20;
}
print "beguser: $beguser\n";
print "nowuser: $nowuser\n";
print "begsys: $begsys\n";
print "nowsys: $nowsys\n";
print "i: $i\n";

ok($i >= 2_000_000, 'very basic times test');


p5pRT commented Jun 5, 2017

From zarniwhoop@ntlworld.com

On Sun, Jun 04, 2017 at 07​:52​:40AM -0700, James E Keenan via RT wrote​:

I am attaching two files which are essentially stripped-down versions of 2 of the 3 failing test files with some debugging statements thrown in.

Can you place 'xtime.t' into 't/op' and then run​:

#####
./perl -Ilib ./t/op/xtime.t
#####

My output for that would be​:

#####
$ ./perl -Ilib ./t/op/xtime.t
1..2
ok 1 - very basic time test
BINGO
beguser: 0
nowuser: 0.01
begsys: 0
nowsys: 0
i: 2000001
ok 2 - very basic times test
#####

And can you then place 'xBenchmark.t' into 'lib/Benchmark' and then run​:

#####
./perl -Ilib ./lib/xBenchmark.t
#####

My output for that would be​:

#####
$ ./perl -Ilib ./lib/xBenchmark.t
1..10
ok 1 - 'Ensure we can create a benchmark object' isa 'Benchmark'
ok 2 - 'timeit CODEREF' isa 'Benchmark'
ok 3 - benchmarked code was run 5 times
ok 4 - 'timeit eval' isa 'Benchmark'
ok 5 - benchmarked code was run 5 times
ok 6 - benchmarked code called without arguments
# Burning CPU to benchmark things; will take time...
$threesecs->[0]: real 3
$threesecs->[1]: user 3.22
$threesecs->[2]: system 0
$threesecs->[3]: children_user 0
$threesecs->[4]: children_system 0
$threesecs->[5]: iters 8820
ok 7 - 'countit 0, CODEREF' isa 'Benchmark'
ok 8 - benchmarked code was run
# in_threesecs=8820 iterations
ok 9 - iters returned positive iterations
ok 10 - 3s cpu3 is at least 3s
# in_threesecs_adj=8217.39130434783 adjusted iterations
#####
Attach output for each.

Thank you very much.

As expected, my output for both is different, but also the
xBenchmark did not get as far.

Thanks.

ĸen
--
I live in a city. I know sparrows from starlings. After that
everything is a duck as far as I'm concerned. -- Monstrous Regiment


p5pRT commented Jun 5, 2017

From zarniwhoop@ntlworld.com

1..2
# Failed test 2 - very basic times test at ./t/op/xtime.t line 42
ok 1 - very basic time test
beguser: 0
nowuser: 0
begsys: 0.01
nowsys: 19.28
i: 310036
not ok 2 - very basic times test


p5pRT commented Jun 5, 2017

From zarniwhoop@ntlworld.com

1..10
ok 1 - 'Ensure we can create a benchmark object' isa 'Benchmark'
ok 2 - 'timeit CODEREF' isa 'Benchmark'
ok 3 - benchmarked code was run 5 times
ok 4 - 'timeit eval' isa 'Benchmark'
ok 5 - benchmarked code was run 5 times
ok 6 - benchmarked code called without arguments
# Burning CPU to benchmark things; will take time...
$threesecs->[0]: real 3
$threesecs->[1]: user 0
$threesecs->[2]: system 3.11999999999989
$threesecs->[3]: children_user 0
$threesecs->[4]: children_system 0
$threesecs->[5]: iters 6299
Why is 'user' being reported as 0? at ./lib/xBenchmark.t line 70.
# Looks like your test exited with 255 just after 6.


p5pRT commented Jun 5, 2017

From @jkeenan

On Mon, 05 Jun 2017 18​:19​:21 GMT, zarniwhoop@​ntlworld.com wrote​:

On Sun, Jun 04, 2017 at 07​:52​:40AM -0700, James E Keenan via RT wrote​:

I am attaching two files which are essentially stripped-down versions
of 2 of the 3 failing test files with some debugging statements
thrown in.

So let's first focus on the problem with Benchmark.

And can you then place 'xBenchmark.t' into 'lib/Benchmark' and then
run​:

#####
./perl -Ilib ./lib/xBenchmark.t
#####

My output for that would be​:

#####
$ ./perl -Ilib ./lib/xBenchmark.t
# trim output
# Burning CPU to benchmark things; will take time...
$threesecs->[0]: real 3
$threesecs->[1]: user 3.22
$threesecs->[2]: system 0
$threesecs->[3]: children_user 0
$threesecs->[4]: children_system 0
$threesecs->[5]: iters 8820
# trim output; file completes successfully

In what is presumably a normal environment, the entirety of 'real' is allocated to 'user' (element 1).

You got​:

#####
# Burning CPU to benchmark things; will take time...
$threesecs->[0]: real 3
$threesecs->[1]: user 0
$threesecs->[2]: system 3.11999999999989
$threesecs->[3]: children_user 0
$threesecs->[4]: children_system 0
$threesecs->[5]: iters 6299
Why is 'user' being reported as 0? at ./lib/xBenchmark.t line 70.
# Looks like your test exited with 255 just after 6.
#####

So, as I suspected, in your environment Benchmark is allocating the entirety of 'real' (element 0) to 'system' (element 2). 'user' is 0 which, after being assigned to $cpu3, leads to the division-by-zero error at +77 lib/Benchmark.t.

Is this attributable to the fact that you are in a 'chroot' context? People more knowledgeable than I will have to answer that question.

Thank you very much.

--
James E Keenan (jkeenan@​cpan.org)


p5pRT commented Jun 5, 2017

From @jkeenan

On Mon, 05 Jun 2017 18​:19​:21 GMT, zarniwhoop@​ntlworld.com wrote​:

On Sun, Jun 04, 2017 at 07​:52​:40AM -0700, James E Keenan via RT wrote​:

I am attaching two files which are essentially stripped-down versions
of 2 of the 3 failing test files with some debugging statements
thrown in.

Can you place 'xtime.t' into 't/op' and then run​:

#####
./perl -Ilib ./t/op/xtime.t
#####

My output for that would be​:

#####
$ ./perl -Ilib ./t/op/xtime.t
1..2
ok 1 - very basic time test
BINGO
beguser: 0
nowuser: 0.01
begsys: 0
nowsys: 0
i: 2000001
ok 2 - very basic times test
#####

And can you then place 'xBenchmark.t' into 'lib/Benchmark' and then
run​:

#####
./perl -Ilib ./lib/xBenchmark.t
#####

My output for that would be​:

#####
$ ./perl -Ilib ./lib/xBenchmark.t
1..10
ok 1 - 'Ensure we can create a benchmark object' isa 'Benchmark'
ok 2 - 'timeit CODEREF' isa 'Benchmark'
ok 3 - benchmarked code was run 5 times
ok 4 - 'timeit eval' isa 'Benchmark'
ok 5 - benchmarked code was run 5 times
ok 6 - benchmarked code called without arguments
# Burning CPU to benchmark things; will take time...
$threesecs->[0]: real 3
$threesecs->[1]: user 3.22
$threesecs->[2]: system 0
$threesecs->[3]: children_user 0
$threesecs->[4]: children_system 0
$threesecs->[5]: iters 8820
ok 7 - 'countit 0, CODEREF' isa 'Benchmark'
ok 8 - benchmarked code was run
# in_threesecs=8820 iterations
ok 9 - iters returned positive iterations
ok 10 - 3s cpu3 is at least 3s
# in_threesecs_adj=8217.39130434783 adjusted iterations
#####
Attach output for each.

Thank you very much.

As expected, my output for both is different, but also the
xBenchmark did not get as far.

Thanks.

ĸen

Can you place new attachment 'ytime.t' into 't/op' and then run​:

#####
./perl -Ilib ./t/op/ytime.t
#####

... and attach output (preferably, with '.txt' file extension so that it shows up well in RT).

Thank you very much.

--
James E Keenan (jkeenan@​cpan.org)


p5pRT commented Jun 5, 2017

From @jkeenan

#!./perl -w

BEGIN {
  chdir 't' if -d 't';
  require './test.pl';
  set_up_inc('../lib');
}

plan tests => 2;

# These tests make sure, among other things, that we don't end up
# burning tons of CPU for dates far in the future.
# watchdog() makes sure that the test script eventually exits if
# the tests are triggering the failing behavior
watchdog(250);

($beguser,$begsys) = times;

$beg = time;

while (($now = time) == $beg) { sleep 1 }

ok($now > $beg && $now - $beg < 10, 'very basic time test');
my $x = "aaaa";
for ($i = 0; $i < 1_000_000; $i++) {
  for my $j (1..1000) { ++$x; }; # burn some user cycles
  ($nowuser, $nowsys) = times;
  if ($nowuser > $beguser && ( $nowsys >= $begsys || (!$nowsys && !$begsys))) {
  print "BINGO\n";
  $i = 2_000_000;
  }
  my $k = time - $beg;
  if ($k > 20) {
  print "STINGO: $k is greater than 20 seconds\n";
  last;
  }
}
print "beguser: $beguser\n";
print "nowuser: $nowuser\n";
print "begsys: $begsys\n";
print "nowsys: $nowsys\n";
print "i: $i\n";

ok($i >= 2_000_000, 'very basic times test');


p5pRT commented Jun 5, 2017

From zarniwhoop@ntlworld.com

On Mon, Jun 05, 2017 at 12​:26​:39PM -0700, James E Keenan via RT wrote​:

On Mon, 05 Jun 2017 18​:19​:21 GMT, zarniwhoop@​ntlworld.com wrote​:

On Sun, Jun 04, 2017 at 07​:52​:40AM -0700, James E Keenan via RT wrote​:

I am attaching two files which are essentially stripped-down versions
of 2 of the 3 failing test files with some debugging statements
thrown in.

So let's first focus on the problem with Benchmark.

And can you then place 'xBenchmark.t' into 'lib/Benchmark' and then
run​:

#####
./perl -Ilib ./lib/xBenchmark.t
#####

My output for that would be​:

#####
$ ./perl -Ilib ./lib/xBenchmark.t
# trim output
# Burning CPU to benchmark things; will take time...
$threesecs->[0]: real 3
$threesecs->[1]: user 3.22
$threesecs->[2]: system 0
$threesecs->[3]: children_user 0
$threesecs->[4]: children_system 0
$threesecs->[5]: iters 8820
# trim output; file completes successfully

In what is presumably a normal environment, the entirety of 'real' is allocated to 'user' (element 1).

Just to confirm, yes this is a normal environment on the completed
system.

You got​:

#####
# Burning CPU to benchmark things; will take time...
$threesecs->[0]: real 3
$threesecs->[1]: user 0
$threesecs->[2]: system 3.11999999999989
$threesecs->[3]: children_user 0
$threesecs->[4]: children_system 0
$threesecs->[5]: iters 6299
Why is 'user' being reported as 0? at ./lib/xBenchmark.t line 70.
# Looks like your test exited with 255 just after 6.
#####

So, as I suspected, in your environment Benchmark is allocating the entirety of 'real' (element 0) to 'system' (element 2). 'user' is 0 which, after being assigned to $cpu3, leads to the division-by-zero error at +77 lib/Benchmark.t.

Is this attributable to the fact that you are in a 'chroot' context? People more knowledgeable than I will have to answer that question.

Thank you very much.

But I wasn't in chroot when I ran the two debug versions you sent
me.

ĸen
--
I live in a city. I know sparrows from starlings. After that
everything is a duck as far as I'm concerned. -- Monstrous Regiment


p5pRT commented Jun 5, 2017

From zarniwhoop@ntlworld.com

On Mon, Jun 05, 2017 at 12​:39​:56PM -0700, James E Keenan via RT wrote​:

Can you place new attachment 'ytime.t' into 't/op' and then run​:

#####
./perl -Ilib ./t/op/ytime.t
#####

... and attach output (preferably, with '.txt' file extension so that it shows up well in RT).

Thank you very much.

I'll do it tomorrow, at the moment the system is working through a
manual build of the modules pulled in by biber and its dependencies.

ĸen
--
I live in a city. I know sparrows from starlings. After that
everything is a duck as far as I'm concerned. -- Monstrous Regiment


p5pRT commented Jun 6, 2017

From @iabyn

On Mon, Jun 05, 2017 at 11​:59​:40PM +0100, Ken Moffat wrote​:

But I wasn't in chroot when I ran the two debug versions you sent

I suspect it's more to do with how your kernel is reporting user and sys
time. Could you create the following very simple script and run it using
the OS's 'time' command (adjust the loop count if necessary so it runs for
a few seconds)​:

  @t0 = times;
  1 for 1..200_000_000;
  @t1 = times;
  printf "%.3f usr, %.3f sys, %.3f cusr, %.3f csys\n",
  map $t1[$_] - $t0[$_], 0..3;

on my system I see​:

  $ time perl5260o /tmp/foo
  3.350 usr, 0.000 sys, 0.000 cusr, 0.000 csys

  real 0m3.361s
  user 0m3.357s
  sys 0m0.004s
  $

--
This email is confidential, and now that you have read it you are legally
obliged to shoot yourself. Or shoot a lawyer, if you prefer. If you have
received this email in error, place it in its original wrapping and return
for a full refund. By opening this email, you accept that Elvis lives.


p5pRT commented Jun 6, 2017

From zarniwhoop@ntlworld.com

On Tue, Jun 06, 2017 at 12​:57​:10AM -0700, Dave Mitchell via RT wrote​:

On Mon, Jun 05, 2017 at 11​:59​:40PM +0100, Ken Moffat wrote​:

But I wasn't in chroot when I ran the two debug versions you sent

I suspect its more to do with how your kernel is reporting user and sys
time. Could you create the following very simple script and run it using
the OS's 'time' command (adjust the loop count if necessary so it runs for
a few seconds)​:

  @t0 = times;
  1 for 1..200_000_000;
  @t1 = times;
  printf "%.3f usr, %.3f sys, %.3f cusr, %.3f csys\n",
      map $t1[$_] - $t0[$_], 0..3;

on my system I see:

  $ time perl5260o /tmp/foo
  3.350 usr, 0.000 sys, 0.000 cusr, 0.000 csys

  real    0m3.361s
  user    0m3.357s
  sys     0m0.004s
  $

On my haswell I got similar results, but a bit quicker.
But on the Kaveri, even after doubling the loop count,
user time is zero.

With 200_000_000

0.000 usr, 4.020 sys, 0.000 cusr, 0.000 csys

real 0m4.033s
user 0m0.000s
sys 0m4.025s

and with 400_000_000

0.000 usr, 8.010 sys, 0.000 cusr, 0.000 csys

real 0m8.014s
user 0m0.001s
sys 0m8.012s

Doubling it again, perl continues to put all the time in sys, and
bash's time command is back to 0 for user.

Does that sound like a kernel config issue ? Apart from very
different kernel configs, and perl-5.26.0 vs 5.24.1 on the haswell,
the versions of everything in the base LFS systems are identical.

Thanks
--
I live in a city. I know sparrows from starlings. After that
everything is a duck as far as I'm concerned. -- Monstrous Regiment


p5pRT commented Jun 6, 2017

From zarniwhoop@ntlworld.com

On Mon, Jun 05, 2017 at 12​:39​:56PM -0700, James E Keenan via RT wrote​:

Can you place new attachment 'ytime.t' into 't/op' and then run​:

#####
./perl -Ilib ./t/op/ytime.t
#####

... and attach output (preferably, with '.txt' file extension so that it shows up well in RT).

Thank you very much.

Attached. I mistakenly thought this was the very slow test, so I left
it running while I went out.

Thanks
--
I live in a city. I know sparrows from starlings. After that
everything is a duck as far as I'm concerned. -- Monstrous Regiment


p5pRT commented Jun 6, 2017

From zarniwhoop@ntlworld.com

1..2
# Failed test 2 - very basic times test at ./t/op/ytime.t line 44
ok 1 - very basic time test
STINGO: 21 is greater than 20 seconds
beguser: 0
nowuser: 0
begsys: 0.01
nowsys: 19.86
i: 322763
not ok 2 - very basic times test


p5pRT commented Jun 6, 2017

From @jkeenan

On Tue, 06 Jun 2017 20​:48​:51 GMT, zarniwhoop@​ntlworld.com wrote​:

On Mon, Jun 05, 2017 at 12​:39​:56PM -0700, James E Keenan via RT wrote​:

Can you place new attachment 'ytime.t' into 't/op' and then run​:

#####
./perl -Ilib ./t/op/ytime.t
#####

... and attach output (preferably, with '.txt' file extension so that
it shows up well in RT).

Thank you very much.

Attached. I misthought this was the very slow test, so I left it
running while I went out.

Thanks

So you got​:

#####
ok 1 - very basic time test
STINGO: 21 is greater than 20 seconds
beguser: 0
nowuser: 0
begsys: 0.01
nowsys: 19.86
i: 322763
not ok 2 - very basic times test
#####

... which suggests that your environment is very slow, at least relative to the environments in which we customarily test Perl. (But I think we knew that already.)

Also, as was the case with lib/Benchmark.t, the Perl built-in times() function is allocating all the time to 'system' rather than to 'user'. We don't yet understand why that is happening.
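On Linux, the raw counters behind the times() built-in are the utime and stime fields of /proc/&lt;pid&gt;/stat, so the skew can be observed without Perl at all. A minimal sketch (the field positions 14 and 15 assume the process name contains no spaces):

```shell
# Print the reading process's own user/system CPU time in clock ticks.
# utime is field 14 and stime is field 15 of /proc/self/stat.
if [ -r /proc/self/stat ]; then
    awk '{ print "utime(ticks):", $14, "stime(ticks):", $15 }' /proc/self/stat
else
    echo "no /proc on this system"
fi
```

On the affected box one would expect stime to grow while utime stays at 0 during a CPU-bound loop.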

Thank you very much.
--
James E Keenan (jkeenan@​cpan.org)


p5pRT commented Jun 7, 2017

From @iabyn

On Tue, Jun 06, 2017 at 07​:32​:13PM +0100, Ken Moffat wrote​:

0.000 usr, 8.010 sys, 0.000 cusr, 0.000 csys

real 0m8.014s
user 0m0.001s
sys 0m8.012s

Doubling it again, perl continues to put all the time in sys, and
bash's time command is back to 0 for user.

Does that sound like a kernel config issue ? Apart from very
different kernel configs, and perl-5.26.0 vs 5.24.1 on the haswell,
the versions of everything in the base LFS systems are identical.

Well, it's something to do with the kernel or low-level C library.
The fact that your OS's 'time' utility is reporting all CPU usage as sys
rather than usr means it definitely isn't an issue with perl.

Unless you can provide a valid use case why perl's time-related modules
and test scripts should be adapted to work (or at least fail gracefully)
on a system which fails to correctly report usr vs sys CPU, I'll close
this ticket.

--
A problem shared is a problem doubled.


p5pRT commented Jun 7, 2017

From zarniwhoop@ntlworld.com

On Wed, Jun 07, 2017 at 02​:03​:58PM +0100, Dave Mitchell wrote​:

On Tue, Jun 06, 2017 at 07​:32​:13PM +0100, Ken Moffat wrote​:

0.000 usr, 8.010 sys, 0.000 cusr, 0.000 csys

real 0m8.014s
user 0m0.001s
sys 0m8.012s

Doubling it again, perl continues to put all the time in sys, and
bash's time command is back to 0 for user.

Does that sound like a kernel config issue ? Apart from very
different kernel configs, and perl-5.26.0 vs 5.24.1 on the haswell,
the versions of everything in the base LFS systems are identical.

Well its something to do with the kernel or low-level C library.
The fact that your OS's 'times' utility is reporting all CPU usage as sys
rather usr means it definitely isn't an issue with perl.

Unless you can provide a valid use case why perl's time-related modules
and test scripts should be adapted to work (or at least fail gracefully)
on a system which fails to correctly report usr vs sys CPU, I'll close
this ticket.

Fair enough. Unfortunately my google-fu is not up to this; it looks
like I'll need to go through the kernel differences line-by-line.

Thanks to you both for the help.

--
A problem shared is a problem doubled.

LOL. I like your random .sig generator.

ĸen
--
I live in a city. I know sparrows from starlings. After that
everything is a duck as far as I'm concerned. -- Monstrous Regiment


p5pRT commented Jun 8, 2017

From zarniwhoop@ntlworld.com

On Wed, Jun 07, 2017 at 06​:04​:51AM -0700, Dave Mitchell via RT wrote​:

A follow-up, hoping this gets into the ticket so that maybe google
can find it for anyone who makes the same error​:

This is my second A10 Kaveri, it replaces one which died last year
(and took down some other kit too - don't ya just love consumer
hardware ?). Before that, at some point I had been testing -rc
linux kernels and at one point I got seduced by the "new, shiny"
tickless options (at that time, I think ther kernel help was
incomplete). That turned out to be a bad move (only using one
core!) and I reverted it. But it seems I didn't revert all of it.

The lingering bad settings were CONFIG_NO_HZ_FULL: changing that to
CONFIG_NO_HZ_IDLE gave me a little userspace accounting, but not a lot, and

CONFIG_VIRT_CPU_ACCOUNTING{,_GEN} - changing that to
CONFIG_TICK_CPU_ACCOUNTING made my box _much_ happier: the tests now
complete much faster, and I don't have the extraneous errors.
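For anyone hitting the same symptom, the offending options can be spotted by grepping the kernel config. The helper below is a sketch: the function name and the sample input are illustrative; in practice you would feed it /proc/config.gz or /boot/config-$(uname -r).

```shell
#!/bin/sh
# check_accounting: read kernel .config lines on stdin and flag the
# combination described above (full tickless plus virt CPU accounting).
check_accounting() {
    if grep -E '^CONFIG_(NO_HZ_FULL|VIRT_CPU_ACCOUNTING(_GEN)?)=y'; then
        echo "WARNING: full-tickless / virt CPU accounting enabled"
    else
        echo "OK: no suspect accounting options found"
    fi
}

# Sample lines reproducing the bad configuration (not read from a real kernel).
printf 'CONFIG_NO_HZ_FULL=y\nCONFIG_VIRT_CPU_ACCOUNTING_GEN=y\n' | check_accounting
```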

Thanks again for the help, and sorry for the misplaced noise.

--
I live in a city. I know sparrows from starlings. After that
everything is a duck as far as I'm concerned. -- Monstrous Regiment


p5pRT commented Jun 8, 2017

From @khwilliamson

Thanks for considering those who come after you.

It did make it into the ticket, which I am now closing
--
Karl Williamson


p5pRT commented Jun 8, 2017

@khwilliamson - Status changed from 'open' to 'rejected'

@p5pRT p5pRT closed this as completed Jun 8, 2017