Discussion:
Request for volunteers
Rich Felker
2013-06-30 05:52:02 UTC
Hi all,

With us nearing musl 1.0, there are a lot of things I could use some
help with. Here are a few specific tasks/roles I'm looking for:

1. Put together a list of relevant conferences one or more of us could
attend in the next 4-6 months. I'm in the US and travelling outside
the country would probably be prohibitive unless we also find
funding, but anything in the US is pretty easy for me to get to,
and other people involved in the project could perhaps attend
other conferences outside the US.

2. Organize patches from sabotage, musl-cross, musl pkgsrc, etc. into
suitable form for upstream, and draft appropriate emails or bug
reports to send.

3. Check status of musl support with build systems and distros
possibly adopting it as an option or their main libc. This would
include OpenWRT, Aboriginal, crosstool-ng, buildroot, Alpine Linux,
Gentoo, etc. If their people doing the support seem to have gotten
stuck or need help, offer assistance. Make sure the wiki is kept
updated with info on other projects using musl so we can send folks
their way too.

4. Wikimastering. The wiki could use a lot of organizational
improvement, additional information, and monitoring for outdated
information that needs to be updated or removed.

5. Rigorous testing. My ideal vision of this role is having somebody
who takes a look at each bug fix committed and writes test cases
for the bug and extrapolates tests for possible related bugs that
haven't yet been found. And who reads the glibc bug tracker so we
can take advantage of their bug reports too.

Anyone up for volunteering? :-)

Rich
Szabolcs Nagy
2013-06-30 10:48:35 UTC
Post by Rich Felker
5. Rigorous testing. My ideal vision of this role is having somebody
who takes a look at each bug fix committed and writes test cases
for the bug and extrapolates tests for possible related bugs that
haven't yet been found. And who reads the glibc bug tracker so we
can take advantage of their bug reports too.
i can look into this one
Rich Felker
2013-07-01 03:51:17 UTC
Post by Szabolcs Nagy
Post by Rich Felker
5. Rigorous testing. My ideal vision of this role is having somebody
who takes a look at each bug fix committed and writes test cases
for the bug and extrapolates tests for possible related bugs that
haven't yet been found. And who reads the glibc bug tracker so we
can take advantage of their bug reports too.
i can look into this one
Excellent. If anybody on the team right now is the perfect person for
this task, it's you; I just wasn't clear on whether you'd have any
hope of having time for it.

If it's borderline whether you would or not, perhaps we could find
someone else to be a "research assistant" for the testing project.
This could involve tasks like:

- Reading the git log and making a list of noteworthy bugs to add
regression tests for.

- Determining (perhaps via automated coverage tools) major code we
lack any testing for.

- Cross-checking glibc bug tracker and/or glibc tests for issues that
should be checked in musl too.

Rich
Szabolcs Nagy
2013-07-01 20:58:57 UTC
Post by Rich Felker
- Reading the git log and making a list of noteworthy bugs to add
regression tests for.
- Determining (perhaps via automated coverage tools) major code we
lack any testing for.
- Cross-checking glibc bug tracker and/or glibc tests for issues that
should be checked in musl too.
i was thinking about this and a few categories of tests:

functional:
black box testing of libc interfaces
(eg input-output test vectors)
tries to achieve good coverage
regression:
tests for bugs we found
sometimes the bugs are musl specific or arch specific so
i think these are worth keeping separately from the general
functional tests (eg same repo but different dir)
static:
testing without executing code
eg check the symbols and types in headers
(i have tests that cover all posix headers only using cc)
maybe the binaries can be checked in some way as well
static-src:
code analysis can be done on the source of musl
(eg sparse, cppcheck, clang-analyzer were tried earlier)
(this is different from the other tests and probably should
be set up separately)
metrics:
benchmarks and quality of implementation metrics
(eg performance, memory usage, the usual checks, but even
ulp error measurements may be in this category)

other tools:
a coverage tool would be useful, i'm not sure if anyone
set up one with musl yet (it's not just good for testing,
but also for determining what interfaces are actually in
use in the libc)

clever fuzzer tools would be nice as well, but i don't know
anything that can be used directly on a library (maybe with
small effort they can be used to get better coverage)

as a first step i guess we need to start with the functional
and regression tests

design goals of the test system:
- tests should be easy to run even a single test in isolation
(so test should be self contained if possible)
- output is a report, failure cause should be clear
- the system should not have external dependencies
(other than libc, posix sh, gnu make: so tests are in .c files with
simple buildsystem or .sh wrapper)
- the failure of one test should not interfere with other tests
(so tests should be in separate .c files each with main() and
narrow scope, otherwise a build failure can affect a lot of tests)
- the test system should run on all archs
(so arch specific and implementation defined things should be treated
carefully)
- the test results should be robust
(failures are always reported, deterministically if possible)
- tests should leave the system in clean state
(or easily cleanable state)

some difficulties:
- some "test framework" functionality would be nice, but can be
problematic: eg using nice error reporting function on top of stdio
may cause loss of info because of buffering in case of a crash
- special compiler or linker flags (can be maintained in makefile
or in the .c files as comments)
- tests may require special environment, filesystem access, etc
i'm not sure what's the best way to manage that
(and some tests may need two different uid or other capabilities)
- i looked at the bug history and many bugs are in hard to
trigger cornercases (eg various races) or internally invoke ub
in a way that may be hard to verify in a robust way
- some tests may need significant support code to achieve good
coverage (printf, math, string handling close to 2G,..)
(in such cases we can go with simple self-contained tests without
much coverage, but easy maintenance, or with something
sophisticated)

does that sound right?

i think i can reorganize my libc tests to be more "scalable"
in these directions..
Rich Felker
2013-07-01 23:59:55 UTC
Post by Szabolcs Nagy
black box testing of libc interfaces
(eg input-output test vectors)
tries to achieve good coverage
tests for bugs we found
sometimes the bugs are musl specific or arch specific so
i think these are worth keeping separately from the general
functional tests (eg same repo but different dir)
testing without executing code
eg check the symbols and types in headers
(i have tests that cover all posix headers only using cc)
maybe the binaries can be checked in some way as well
code analysis can be done on the source of musl
(eg sparse, cppcheck, clang-analyzer were tried earlier)
(this is different from the other tests and probably should
be set up separately)
benchmarks and quality of implementation metrics
(eg performance, memory usage, the usual checks, but even
ulp error measurements may be in this category)
One thing we could definitely measure here is "given a program that
just uses interface X, how much crap gets pulled in when static
linking?" I think this could easily be automated to cover tons of
interfaces, but in the interest of signal-to-noise ratio, I think we
would want to manually select interesting interfaces to have it
performed on.
Post by Szabolcs Nagy
a coverage tool would be useful, i'm not sure if anyone
set up one with musl yet (it's not just good for testing,
but also for determining what interfaces are actually in
use in the libc)
Yes. Coverage from real-world apps can also tell us which interfaces
need tests.
Post by Szabolcs Nagy
clever fuzzer tools would be nice as well, but i don't know
anything that can be used directly on a library (maybe with
small effort they can be used to get better coverage)
Yes, automatically generating meaningful inputs that meet the
interface contracts is non-trivial.
Post by Szabolcs Nagy
as a first step i guess we need to start with the functional
and regression tests
Agreed, these are the highest priority.
Post by Szabolcs Nagy
- tests should be easy to run even a single test in isolation
(so test should be self contained if possible)
Agreed. This is particularly important when trying to fix something
that broke a test, or when testing a new port (since it may be hard to
get the port working enough to test anything if failure of one test
prevents seeing the results of others).
Post by Szabolcs Nagy
- output is a report, failure cause should be clear
This would be really nice.
Post by Szabolcs Nagy
- the system should not have external dependencies
(other than libc, posix sh, gnu make: so tests are in .c files with
simple buildsystem or .sh wrapper)
Agreed.
Post by Szabolcs Nagy
- the failure of one test should not interfere with other tests
(so tests should be in separate .c files each with main() and
narrow scope, otherwise a build failure can affect a lot of tests)
How do you delineate what constitutes a single test? For example, we
have hundreds of test cases for scanf, and it seems silly for each
input/format combination to be a separate .c file. On the other hand,
my current scanf tests automatically test both byte and wide versions
of both string and stdio versions of scanf; it may be desirable in
principle to separate these into 4 separate files.

My naive feeling would be that deciding "how much can go in one test"
is not a simple rule we can follow, but requires considering what's
being tested, how "low-level" it is, and whether the expected failures
might interfere with other tests. For instance a test that's looking
for out-of-bounds accesses would not be a candidate for doing a lot in
a single test file, but a test that's merely looking for correct
parsing could possibly get away with testing lots of assertions in a
single file.
Post by Szabolcs Nagy
- the test system should run on all archs
(so arch specific and implementation defined things should be treated
carefully)
It should also run on all libcs, I think, with tests for unsupported
functionality possibly failing at build time.
Post by Szabolcs Nagy
- the test results should be robust
(failures are always reported, deterministically if possible)
I would merely add that this is part of the requirement of minimal
dependency. For example, if you have a fancy test framework that uses
stdio and malloc all over the place in the same process as the test,
it's pretty hard to test stdio and malloc robustly...
Post by Szabolcs Nagy
- tests should leave the system in clean state
(or easily cleanable state)
Yes, I think this mainly pertains to temp files, named POSIX IPC or
XSI IPC objects, etc.
Post by Szabolcs Nagy
- some "test framework" functionality would be nice, but can be
problematic: eg using nice error reporting function on top of stdio
may cause loss of info because of buffering in case of a crash
I think any fancy "framework" stuff could be purely in the controlling
and reporting layer, outside the address space of the actual tests. We
may however need a good way for the test to communicate its results to
the framework...
Post by Szabolcs Nagy
- special compiler or linker flags (can be maintained in makefile
or in the .c files as comments)
One thing that comes to mind where tests may need a lot of "build
system" help is testing the dynamic linker.
Post by Szabolcs Nagy
- tests may require special environment, filesystem access, etc
i'm not sure what's the best way to manage that
(and some tests may need two different uid or other capabilities)
If possible, I think we should make such tests use Linux containers,
so that they can be tested without elevated privileges. I'm not
experienced with containers, but my feeling is that this is the only
reasonable way to get a controlled environment for tests that need
that sort of thing, without having the user/admin do a lot of sketchy
stuff to prepare for a test.

Fortunately these are probably low-priority and could be deferred
until later.
Post by Szabolcs Nagy
- i looked at the bug history and many bugs are in hard to
trigger cornercases (eg various races) or internally invoke ub
in a way that may be hard to verify in a robust way
Test cases for race conditions make one of the most interesting types
of test writing. :-) The main key is that you need to have around a
copy of the buggy version to test against. Such tests would not have
FAILED or PASSED as possible results, but rather FAILED, or FAILED TO
FAIL. :-)
Post by Szabolcs Nagy
- some tests may need significant support code to achieve good
coverage (printf, math, string handling close to 2G,..)
(in such cases we can go with simple self-contained tests without
much coverage, but easy maintenance, or with something
sophisticated)
I don't follow.
Post by Szabolcs Nagy
does that sound right?
i think i can reorganize my libc tests to be more "scalable"
in these directions..
:)

Rich
Szabolcs Nagy
2013-07-02 02:19:37 UTC
Post by Rich Felker
Post by Szabolcs Nagy
- the failure of one test should not interfere with other tests
(so tests should be in separate .c files each with main() and
narrow scope, otherwise a build failure can affect a lot of tests)
How do you delineate what constitutes a single test? For example, we
have hundreds of test cases for scanf, and it seems silly for each
input/format combination to be a separate .c file. On the other hand,
my current scanf tests automatically test both byte and wide versions
of both string and stdio versions of scanf; it may be desirable in
principle to separate these into 4 separate files.
My naive feeling would be that deciding "how much can go in one test"
is not a simple rule we can follow, but requires considering what's
being tested, how "low-level" it is, and whether the expected failures
might interfere with other tests. For instance a test that's looking
for out-of-bounds accesses would not be a candidate for doing a lot in
a single test file, but a test that's merely looking for correct
parsing could possibly get away with testing lots of assertions in a
single file.
yes the boundary is not clear, but eg the current pthread
test does too many kinds of things in one file

if the 'hundreds of test cases' can be represented as
a simple array of test vectors then that should go into
one file

if many functions want to use the same test vectors then
at some point it's worth moving the vectors out to a
header file and write separate tests for the different
functions
Post by Rich Felker
Post by Szabolcs Nagy
- some "test framework" functionality would be nice, but can be
problematic: eg using nice error reporting function on top of stdio
may cause loss of info because of buffering in case of a crash
I think any fancy "framework" stuff could be purely in the controlling
and reporting layer, outside the address space of the actual tests. We
may however need a good way for the test to communicate its results to
the framework...
the simple approach is to make each test a standalone process that
exits with 0 on success

in the failure case it can use dprintf to print error messages to
stdout and the test system collects the exit status and the messages
Post by Rich Felker
Post by Szabolcs Nagy
- special compiler or linker flags (can be maintained in makefile
or in the .c files as comments)
One thing that comes to mind where tests may need a lot of "build
system" help is testing the dynamic linker.
yes

and we need to compile with -lpthread -lm -lrt -l...
if the tests should work on other libcs

my current solution is using wildcard rules for building
*_dso.c into .so and *.c into executables and then
add extra rules and target specific make variables:

foo: LDFLAGS+=-ldl -rdynamic
foo: foo_dso.so

the other solution i've seen is to put all the build commands
into the .c file as comments:

//RUN cc -c -o $name.o $name.c
//RUN cc -o $name $name.o
...

and use simple shell scripts as the build system
(dependencies are harder to track this way, but the tests
are more self-contained)
Post by Rich Felker
Post by Szabolcs Nagy
- tests may require special environment, filesystem access, etc
i'm not sure what's the best way to manage that
(and some tests may need two different uid or other capabilities)
If possible, I think we should make such tests use Linux containers,
so that they can be tested without elevated privileges. I'm not
experienced with containers, but my feeling is that this is the only
reasonable way to get a controlled environment for tests that need
that sort of thing, without having the user/admin do a lot of sketchy
stuff to prepare for a test.
Fortunately these are probably low-priority and could be deferred
until later.
ok, skip these for now
Post by Rich Felker
Post by Szabolcs Nagy
- i looked at the bug history and many bugs are in hard to
trigger cornercases (eg various races) or internally invoke ub
in a way that may be hard to verify in a robust way
Test cases for race conditions make one of the most interesting types
of test writing. :-) The main key is that you need to have around a
copy of the buggy version to test against. Such tests would not have
FAILED or PASSED as possible results, but rather FAILED, or FAILED TO
FAIL. :-)
hm we can introduce a third result for tests that try to trigger
some bug but are not guaranteed to do so
(eg failed,passed,inconclusive)
but probably that's more confusing than useful
Post by Rich Felker
Post by Szabolcs Nagy
- some tests may need significant support code to achieve good
coverage (printf, math, string handling close to 2G,..)
(in such cases we can go with simple self-contained tests without
much coverage, but easy maintenance, or with something
sophisticated)
I don't follow.
i mean for many small functions there is not much difference between
a simple sanity check and full coverage (eg basename can be thoroughly
tested by about 10 input-output pairs)

but there can be a huge difference: eg detailed testing of getaddrinfo
requires non-trivial setup with dns server etc, it's much easier to do
some sanity checks like gnulib would do, or a different example is
rand: a real test would be like the diehard test suite while the sanity
check is trivial

so i'm not sure how much engineering should go into the tests:
go for a small maintainable set that touch as many areas in libc
as possible, or go for extensive coverage and develop various tools
and libs that help setting up the environment or generate large set
of test cases (eg my current math tests are closer to this latter one)

if the goal is to execute the test suite as a post-commit hook
then there should be a reasonable limit on resource usage, build and
execution time etc and this limit affects how the code may be
organized, how errors are reported..
(most test systems i've seen are for simple unit tests: they allow
checking a few constraints and then report errors in a nice way,
however in case of libc i'd assume that you want to enumerate the
weird corner-cases to find bugs more effectively)
Rich Felker
2013-07-02 07:49:20 UTC
Post by Szabolcs Nagy
Post by Rich Felker
My naive feeling would be that deciding "how much can go in one test"
is not a simple rule we can follow, but requires considering what's
being tested, how "low-level" it is, and whether the expected failures
might interfere with other tests. For instance a test that's looking
for out-of-bounds accesses would not be a candidate for doing a lot in
a single test file, but a test that's merely looking for correct
parsing could possibly get away with testing lots of assertions in a
single file.
yes the boundary is not clear, but eg the current pthread
test does too many kinds of things in one file
I agree completely. By the way, your mentioning pthread tests reminds
me that we need a reliable way to fail tests that have deadlocked or
otherwise hung. The standard "let it run for N seconds then kill it"
approach is rather uncivilized. I wonder if we could come up with a
nice way with a mix of realtime and cputime timers to observe complete
lack of forward progress.
Post by Szabolcs Nagy
if the 'hundreds of test cases' can be represented as
a simple array of test vectors then that should go into
one file
if many functions want to use the same test vectors then
at some point it's worth moving the vectors out to a
header file and write separate tests for the different
functions
Indeed, that is probably the way I should have factored my scanf
tests, but there is something to be said for getting the 4 errors for
the 4 functions with the same vector collated together in the output.
Post by Szabolcs Nagy
Post by Rich Felker
I think any fancy "framework" stuff could be purely in the controlling
and reporting layer, outside the address space of the actual tests. We
may however need a good way for the test to communicate its results to
the framework...
the simple approach is to make each test a standalone process that
exits with 0 on success
in the failure case it can use dprintf to print error messages to
stdout and the test system collects the exit status and the messages
Agreed. And if any test is trying to avoid stdio entirely, it can use
write() directly to generate the output.
Post by Szabolcs Nagy
Post by Rich Felker
One thing that comes to mind where tests may need a lot of "build
system" help is testing the dynamic linker.
yes
and we need to compile with -lpthread -lm -lrt -l...
if the tests should work on other libcs
Indeed. If we were testing other libcs, we might even want to run some
non-multithreaded tests with and without -lpthread in case the
override symbols in libpthread break something. Of course that can't
happen for musl; the equivalent test for musl would be static linking
and including or excluding references to certain otherwise-irrelevant
functions that might affect which version of another function gets
linked.

BTW, to use or not to use -static is also a big place we need build
system help.
Post by Szabolcs Nagy
my current solution is using wildcard rules for building
*_dso.c into .so and *.c into executables and then
foo: LDFLAGS+=-ldl -rdynamic
foo: foo_dso.so
I wasn't aware of this makefile trick to customize flags for
different files. This could be very useful for customized optimization
levels in musl:

ifdef OPTIMIZE
$(OPTIMIZE_OBJS) $(OPTIMIZE_OBJS:%.o=%.lo): CFLAGS+=-O3
endif
Post by Szabolcs Nagy
the other solution i've seen is to put all the build commands
//RUN cc -c -o $name.o $name.c
//RUN cc -o $name $name.o
....
and use simple shell scripts as the build system
(dependencies are harder to track this way, but the tests
are more self-contained)
What about a mix? Have the makefile include another makefile fragment
with a rule to generate that fragment, where the fragment is generated
from comments in the source files. Then you have full dependency
tracking via make, and self-contained tests.
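a hedged sketch of that mix (the fragment name tests.mk, the //RUN comment marker, and the gen-rules.sh extractor are all hypothetical):

```make
# the makefile includes a generated fragment; the rule below
# teaches make to (re)generate it from the //RUN comments in
# the test sources, and make restarts itself with the fresh
# dependency info after rebuilding an included file
-include tests.mk

tests.mk: $(wildcard *.c)
	./gen-rules.sh $^ > $@
```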
Post by Szabolcs Nagy
Post by Rich Felker
Post by Szabolcs Nagy
- i looked at the bug history and many bugs are in hard to
trigger cornercases (eg various races) or internally invoke ub
in a way that may be hard to verify in a robust way
Test cases for race conditions make one of the most interesting types
of test writing. :-) The main key is that you need to have around a
copy of the buggy version to test against. Such tests would not have
FAILED or PASSED as possible results, but rather FAILED, or FAILED TO
FAIL. :-)
hm we can introduce a third result for tests that try to trigger
some bug but are not guaranteed to do so
(eg failed,passed,inconclusive)
but probably that's more confusing than useful
Are you aware of any such cases?
Post by Szabolcs Nagy
Post by Rich Felker
Post by Szabolcs Nagy
- some tests may need significant support code to achieve good
coverage (printf, math, string handling close to 2G,..)
(in such cases we can go with simple self-contained tests without
much coverage, but easy maintenance, or with something
sophisticated)
I don't follow.
i mean for many small functions there is not much difference between
a simple sanity check and full coverage (eg basename can be thoroughly
tested by about 10 input-output pairs)
but there can be a huge difference: eg detailed testing of getaddrinfo
requires non-trivial setup with dns server etc, it's much easier to do
some sanity checks like gnulib would do, or a different example is
rand: a real test would be like the diehard test suite while the sanity
check is trivial
By the way, getaddrinfo (the dns resolver core of it) had a nasty bug
at one point in the past that randomly smashed the stack based on the
timing of dns responses. This would be a particularly hard thing to
test, but if we do eventually want to have regression tests for
timing-based bugs, it might make sense to use debuglib
(https://github.com/rofl0r/debuglib) and set breakpoints at key
functions to control the timing.
Post by Szabolcs Nagy
go for a small maintainable set that touch as many areas in libc
as possible, or go for extensive coverage and develop various tools
and libs that help setting up the environment or generate large set
of test cases (eg my current math tests are closer to this latter one)
I think something in between is what we should aim for, tuned for
where we expect to find bugs that matter. For functions like printf,
scanf, strtol, etc. that have a lot of complex logic and exact
behavior they must deliver or risk introducing serious application
bugs, high coverage is critical to delivering a libc we can be
confident in. But for other functions, a simple sanity check might
suffice. Sanity checks are very useful for new ports, since failure to
pass can quickly show that we have syscall conventions or struct
definitions wrong, alignment bugs, bad asm, etc. They would also
probably have caught the recent embarrassing mbsrtowcs bug I fixed.

Here are some exhaustive tests we could easily perform:

- rand_r: period and bias
- all multibyte to wide operations: each valid UTF-8 character and
each invalid prefix. for functions that behave differently based on
whether output pointer is null, testing both ways.
- all wide to multibyte functions: each valid and invalid wchar_t.

And some functions that would probably be fine with just sanity
checks:

- dirent interfaces
- network address conversions
- basename/dirname
- signal operations
- search interfaces

And things that can't be tested exhaustively but which I would think
need serious tests:

- stdio (various combinations of buffering, use of unget buffer, scanf
pushback, seeking, file position, flushing, switching
reading/writing, eof and error flags, ...)
- AIO (it will probably fail now tho)
- threads (synchronization primitives, cancellation, TSD dtors, ...)
- regex (sanity-check all features, longest-match rule, ...)
- fnmatch, glob, and wordexp
- string functions
Post by Szabolcs Nagy
if the goal is to execute the test suite as a post-commit hook
I think that's a little too resource-heavy for a full test, but
perhaps reasonable for a subset of tests.
Post by Szabolcs Nagy
then there should be a reasonable limit on resource usage, build and
execution time etc and this limit affects how the code may be
organized, how errors are reported..
(most test systems i've seen are for simple unit tests: they allow
checking a few constraints and then report errors in a nice way,
however in case of libc i'd assume that you want to enumerate the
weird corner-cases to find bugs more effectively)
Yes, I think so.

Rich
Szabolcs Nagy
2013-07-16 16:20:31 UTC
Post by Rich Felker
What about a mix? Have the makefile include another makefile fragment
with a rule to generate that fragment, where the fragment is generated
from comments in the source files. Then you have full dependency
tracking via make, and self-contained tests.
i wrote some tests but the build system became a bit nasty
i attached the current layout with most of the test cases
removed so someone can take a look and/or propose a better
buildsystem before i do too much work in the wrong direction

each directory has separate makefile because they work
differently

functional/ and regression/ tests have the same makefile,
they set up a lot of make variables for each .c file in
the directory, the variables and rules can be overridden
by a similarly named .mk file
(this seems to be more reliable than target specific vars)

(now it builds both static and dynamic linked binaries
this can be changed)

i use the srcdir variable so it is possible to build
the binaries into a different directory (so a single
source tree can be used for glibc and musl test binaries)
i'm not sure how useful that is (one could use several
git repos as well)

another approach would be one central makefile that
collects all the sources and then you have to build
tests from the central place
(but i thought that sometimes you just want to run
a subset of the tests and that's easier with the
makefile per dir approach, another issue is dlopen
and ldso tests need the .so binary at the right path
at runtime so you cannot run the tests from arbitrary
directory)

yet another approach would be to use a simple makefile
with explicit rules without fancy gnu make tricks
but then the makefile needs to be edited whenever a
new test is added

i'm not sure what's the best way to handle common/
code in case of decentralized makefiles, now i
collected them into a separate directory that is
built into a 'libtest.a' that is linked to all
tests so you have to build common/ first

i haven't yet done proper collection of the reports
and i'll need some tool to run the test cases:
i don't know how to report the signal name or number
(portably) from sh when a test is killed by a signal
(the shell prints segfault etc to its stderr which may
be used) and i don't know how to kill the test reliably
after a timeout

i hope this makes sense
Rich Felker
2013-07-17 15:41:29 UTC
Post by Szabolcs Nagy
Post by Rich Felker
What about a mix? Have the makefile include another makefile fragment
with a rule to generate that fragment, where the fragment is generated
from comments in the source files. Then you have full dependency
tracking via make, and self-contained tests.
i wrote some tests but the build system became a bit nasty
i attached the current layout with most of the test cases
removed so someone can take a look and/or propose a better
buildsystem before i do too much work in the wrong direction
If you really want to do multiple makefiles, what about at least
setting up the top-level makefile so it invokes them via dependencies
rather than a shell for-loop?
Post by Szabolcs Nagy
each directory has separate makefile because they work
differently
functional/ and regression/ tests have the same makefile,
they set up a lot of make variables for each .c file in
the directory, the variables and rules can be overridden
by a similarly named .mk file
(this seems to be more reliable than target specific vars)
(now it builds both static and dynamic linked binaries
this can be changed)
It's probably useful. We had plenty of bugs that only showed up one
way or the other, but it may be useful just to test the cases where we
know or expect it matters (purely in the interest of build time and
run time).
Post by Szabolcs Nagy
i use the srcdir variable so it is possible to build
the binaries into a different directory (so a single
source tree can be used for glibc and musl test binaries)
i'm not sure how useful is that (one could use several
git repos as well)
If it's easy to do, I like it. It makes it easy to try local changes
on both without committing them to a repo.
Post by Szabolcs Nagy
another approach would be one central makefile that
collects all the sources and then you have to build
tests from the central place
(but i thought that sometimes you just want to run
a subset of the tests and that's easier with the
makefile per dir approach,
I would like this better, but I'm happy to have whatever works. IMO
it's not too bad to support building subsets with a single makefile.
You just have variables containing the names of all tests in a
particular subset and rules that depend on just those tests. One thing
I also just thought of is that you could have separate REPORT files
for each test which are concatenated to the final REPORT file. This
makes it possible to run the tests in parallel. In general, I think
the more declarative/functional and less procedural you make a
makefile, the simpler it is and the better it works.
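[A sketch of the per-test report idea, with hypothetical file names: each test writes its own .err file, and REPORT is just the concatenation, so independent tests can run under make -jN:]

```make
# Hypothetical layout: each built test binary t.exe produces t.err
# when run; REPORT concatenates them, so tests can run in parallel.
TESTS = $(wildcard functional/*.exe regression/*.exe)
ERRS = $(TESTS:.exe=.err)

REPORT: $(ERRS)
	cat $(ERRS) > $@

# "-" so a failing test still leaves its output in the report
%.err: %.exe
	-./$< > $@ 2>&1
```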
Post by Szabolcs Nagy
another issue is dlopen
and ldso tests need the .so binary at the right path
at runtime so you cannot run the tests from arbitrary
directory)
Perhaps the makefile could pass the directory containing the test as
an argument to it for these tests so they could chdir to their own
location as part of the test?
Post by Szabolcs Nagy
yet another approach would be to use a simple makefile
with explicit rules without fancy gnu make tricks
but then the makefile needs to be edited whenever a
new test is added
I like the current approach where you don't have to edit the makefile.
:-)
Post by Szabolcs Nagy
i'm not sure what's the best way to handle common/
code in case of decentralized makefiles, now i
collected them into a separate directory that is
built into a 'libtest.a' that is linked to all
tests so you have to build common/ first
That's why I like unified makefiles.
Post by Szabolcs Nagy
i haven't yet done proper collection of the reports
i don't know how to report the signal name or number
(portably) from sh when a test is killed by a signal
Instead of using the shell, run it from your own program that gets the
exit status with waitpid and passes that to strsignal.
Post by Szabolcs Nagy
(the shell prints segfault etc to its stderr which may
be used) and i don't know how to kill the test reliably
after a timeout
i hope this makes sense
Yes. Hope this review is helpful. Again, this is your project and I'm
very grateful that you're doing it, so I don't want to impose my
opinions on how to do stuff, especially if it hinders your ability to
get things done.

Thanks and best wishes,

Rich
Daniel Cegiełka
2013-06-30 11:02:03 UTC
Permalink
Rich, you missed something:

6. Man pages for musl. We need to describe the functions and
namespaces in header files.

Daniel
Rich Felker
2013-06-30 12:13:45 UTC
Permalink
Post by Daniel Cegiełka
6. Man pages for musl. We need to describe the functions and
namespaces in header files.
This is a good topic for discussion. My documentation goal for 1.0 has
been aligned with the earlier docs outline proposal I sent to the list
a while back. Full man pages would be a much bigger task, and it's not
even something a volunteer could do without some major collaboration
with people who have a detailed understanding of every function in
musl. (Sadly, wrong man pages are probably worse than no man pages.)

What might be better for the near future is to get the POSIX man pages
project updated to match POSIX-2008+TC1 so that users of musl who want
man pages for libc functions can install them and have them match the
current version. Separate man pages could then be made for nonstandard
functions or functions that require significant implementation
specific documentation, possibly based on the Linux man pages project,
but with glibc-specific information just removed (for functions that
are predominantly kernel-level) or changed (where documenting musl
semantics matters).

Rich
i***@lavabit.com
2013-06-30 22:29:14 UTC
Permalink
Post by Rich Felker
Post by Daniel Cegiełka
6. Man pages for musl. We need to describe the functions and
namespaces in header files.
This is a good topic for discussion. My documentation goal for 1.0 has
been aligned with the earlier docs outline proposal I sent to the list
a while back. Full man pages would be a much bigger task, and it's not
even something a volunteer could do without some major collaboration
with people who have a detailed understanding of every function in
musl. (Sadly, wrong man pages are probably worse than no man pages.)
What might be better for the near future is to get the POSIX man pages
project updated to match POSIX-2008+TC1 so that users of musl who want
I seem to recall running across the request for this; IIRC, the
maintainer said he'd wait until TOG uploaded an nroff tarball for
POSIX2008.
Post by Rich Felker
man pages for libc functions can install them and have them match the
current version. Separate man pages could then be made for nonstandard
functions or functions that require significant implementation
specific documentation, possibly based on the Linux man pages project,
but with glibc-specific information just removed (for functions that
are predominantly kernel-level) or changed (where documenting musl
semantics matters).
Rich
The linux man pages project may focus on glibc, but they document
differences in other Linux libc versions; it might make sense to try
sending them patches that document musl differences.
Besides avoiding a new project, this would be more convenient for those
targeting multiple systems (eventually, a developer on Debian or RHEL
would see it while reading the man page...).
Of course, this doesn't cover the man pages for BSD-specific functions.
Rich Felker
2013-07-01 03:31:13 UTC
Permalink
Post by i***@lavabit.com
Post by Rich Felker
What might be better for the near future is to get the POSIX man pages
project updated to match POSIX-2008+TC1 so that users of musl who want
I seem to recall running across the request for this; IIRC, the
maintainer said he'd wait until TOG uploaded an nroff tarball for
POSIX2008.
This sounds unlikely. My understanding was that the people who
released the "3p" man pages generated them from the published POSIX
spec (Issue 6) with the blessing of the Open Group to license them
acceptably for distribution. I don't think the Open Group gave them
nroff files though...
Post by i***@lavabit.com
The linux man pages project may focus on glibc, but they document
differences in other Linux libc versions; it might make sense to try
sending them patches that document musl differences.
Besides avoiding a new project, this would be more convenient for those
targeting multiple systems (eventually, a developer on Debian or RHEL
would see it while reading the man page...).
I agree this would be ideal, but I think it would require convincing
them that musl is noteworthy for inclusion in the man pages (where,
for example, uClibc, dietlibc, klibc, etc. do not seem to be
considered noteworthy). I'm not sure if we would face political
obstacles here, but we could certainly try.
Post by i***@lavabit.com
Of course, this doesn't cover the man pages for BSD-specific functions.
I think there are only a very few BSD functions in musl which are not
also in glibc. Writing man pages for these, or copying and editing the
BSD ones, would not be a huge task.

Rich
Isaac
2013-07-01 17:42:45 UTC
Permalink
Post by Rich Felker
Post by i***@lavabit.com
Post by Rich Felker
What might be better for the near future is to get the POSIX man pages
project updated to match POSIX-2008+TC1 so that users of musl who want
I seem to recall running across the request for this; IIRC, the
maintainer said he'd wait until TOG uploaded an nroff tarball for
POSIX2008.
This sounds unlikely. My understanding was that the people who
released the "3p" man pages generated them from the published POSIX
spec (Issue 6) with the blessing of the Open Group to license them
acceptably for distribution. I don't think the Open Group gave them
nroff files though...
OK, it was the Debian maintainer, and I'm not sure exactly
what he meant:

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=622918

I think it should be simple to convert the text to a man page once it's
in plain text format, and plan to write a shell script to do that for
my own use shortly; I could provide the script to others, though I'm
not sure about distributing the output myself.

Isaac Dunham
Alex Caudill
2013-07-01 17:46:53 UTC
Permalink
I don't have a ton of free time but I'd be happy to help manage wiki
content and maybe help out a bit with build system testing.

If anyone wants to delegate some tasks in these areas, feel free to mail me
directly.

Thanks!
Post by Isaac
Post by Rich Felker
Post by i***@lavabit.com
Post by Rich Felker
What might be better for the near future is to get the POSIX man pages
project updated to match POSIX-2008+TC1 so that users of musl who want
I seem to recall running across the request for this; IIRC, the
maintainer said he'd wait until TOG uploaded an nroff tarball for
POSIX2008.
This sounds unlikely. My understanding was that the people who
released the "3p" man pages generated them from the published POSIX
spec (Issue 6) with the blessing of the Open Group to license them
acceptably for distribution. I don't think the Open Group gave them
nroff files though...
OK, it was the Debian maintainer, and I'm not sure exactly
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=622918
I think it should be simple to convert the text to a man page once it's
in plain text format, and plan to write a shell script to do that for
my own use shortly; I could provide the script to others, though I'm
not sure about distributing the output myself.
Isaac Dunham
Isaac
2013-07-01 21:12:49 UTC
Permalink
Post by Isaac
I think it should be simple to convert the text to a man page once it's
in plain text format, and plan to write a shell script to do that for
my own use shortly; I could provide the script to others, though I'm
not sure about distributing the output myself.
I have a shell script using mksh that does part of the work.
Currently, the usage would be like this:

for m in *.html
do
lynx -dump "$m" | posix2nroff "$m" > "$(basename "$m" .html).3posix"
done
(I intend to change it to call lynx and add the extension, plus
symlink where the HTML pages do so.)

IF YOU CHANGE THE SHELL, YOU WILL NEED TO DEBUG IT!
Quotes and escaping can vary subtly in ways that break the output (I know
because I tested...); additionally, bash resulted in several places where
the output of printf was something like "N^HNA^HAM^HME^HE" (not sure why).


HTH,
Isaac Dunham
Rob Landley
2013-07-01 03:13:23 UTC
Permalink
Post by Rich Felker
Post by Daniel Cegiełka
6. Man pages for musl. We need to describe the functions and
namespaces in header files.
This is a good topic for discussion. My documentation goal for 1.0 has
been aligned with the earlier docs outline proposal I sent to the list
a while back. Full man pages would be a much bigger task, and it's not
even something a volunteer could do without some major collaboration
with people who have a detailed understanding of every function in
musl. (Sadly, wrong man pages are probably worse than no man pages.)
Michael Kerrisk does man pages. The best thing to do is feed him
information about musl-specific stuff. He can probably do some kind of
inline notation in his (docbook?) masters to make musl versions and
glibc versions.

Reinventing this wheel would suck.
Post by Rich Felker
What might be better for the near future is to get the POSIX man pages
project updated to match POSIX-2008+TC1 so that users of musl who want
man pages for libc functions can install them and have them match the
current version.
I note that the guy who did the posix man pages ten years ago was:
Michael Kerrisk.

(Honestly, posix seems to be slipping into some kind of dotage. One of
its driving forces these days is Jorg Schilling. Let that sink in for a
bit.)
Post by Rich Felker
Separate man pages could then be made for nonstandard
functions or functions that require significant implementation
specific documentation, possibly based on the Linux man pages project,
but with glibc-specific information just removed (for functions that
are predominantly kernel-level) or changed (where documenting musl
semantics matters).
Interface with the linux man pages project. They don't have strong
glibc loyalty, they're just trying to document what people actually use.

Rob
Rich Felker
2013-07-01 03:43:17 UTC
Permalink
Post by Rob Landley
Post by Rich Felker
Post by Daniel Cegiełka
6. Man pages for musl. We need to describe the functions and
namespaces in header files.
This is a good topic for discussion. My documentation goal for 1.0 has
been aligned with the earlier docs outline proposal I sent to the list
a while back. Full man pages would be a much bigger task, and it's not
even something a volunteer could do without some major collaboration
with people who have a detailed understanding of every function in
musl. (Sadly, wrong man pages are probably worse than no man pages.)
Michael Kerrisk does man pages. The best thing to do is feed him
information about musl-specific stuff. He can probably do some kind
of inline notation in his (docbook?) masters to make musl versions
and glibc versions.
Reinventing this wheel would suck.
Well for the standard functions, I really like the 3p versions better
than the "Linux" versions. The Linux man pages tend to have a lot of
historical cruft -- things like recommending the wrong headers to get
the function, the wrong error codes for certain conditions, etc. --
and I think auditing them for agreement with POSIX and with musl would
be a fairly major task in itself.
Post by Rob Landley
Post by Rich Felker
What might be better for the near future is to get the POSIX man pages
project updated to match POSIX-2008+TC1 so that users of musl who want
man pages for libc functions can install them and have them match the
current version.
Michael Kerrisk.
Maybe someone should contact him about all the stuff we're discussing.
Post by Rob Landley
(Honestly, posix seems to be slipping into some kind of dotage. One
Overall POSIX is going way up in quality, adopting extensions that are
widespread and very useful, and fixing issues that make it impossible
to write correct programs or libraries using certain features. I don't
agree with every single decision made, but that's the way the world
works.
Post by Rob Landley
of its driving forces these days is Jorg Schilling. Let that sink in
for a bit.)
I haven't seen any abuse by Schilly of his role in the standards
process. The behavior I would call abuse of power (mainly, the way the
C locale issue was treated with knee-jerk reactionary attitudes and
shut-down of rational discussion) has come from others but not him.
I'm not a fan of his fandom of Solaris, but he's not even been pushing
a Solaris agenda as far as I can tell.

Anyway, saying to beware of POSIX because Schilly likes POSIX is like
saying not to eat donuts because Schilly likes donuts...
Post by Rob Landley
Post by Rich Felker
Separate man pages could then be made for nonstandard
functions or functions that require significant implementation
specific documentation, possibly based on the Linux man pages project,
but with glibc-specific information just removed (for functions that
are predominantly kernel-level) or changed (where documenting musl
semantics matters).
Interface with the linux man pages project. They don't have strong
glibc loyalty, they're just trying to document what people actually use.
Yes, I think the hardest part will be convincing them that people use
musl, at least on the scale that makes it worth noting and including
in man pages that are installed on every single Linux system. But
hopefully my concern ends up being unfounded. :-)

Rich
Nathan McSween
2013-06-30 15:25:38 UTC
Permalink
Post by Rich Felker
5. Rigorous testing. My ideal vision of this role is having somebody
who takes a look at each bug fix committed and writes test cases
for the bug and extrapolates tests for possible related bugs that
haven't yet been found. And who reads the glibc bug tracker so we
can take advantage of their bug reports too.
These could be found using something like LLVM's KLEE or another
SMT-based bug finder. IMO conformance tests should be written with the
project; otherwise what is currently happening with musl (conformance
tests don't exist or have bugs) happens.
Luca Barbato
2013-06-30 22:59:01 UTC
Permalink
Post by Nathan McSween
Post by Rich Felker
5. Rigorous testing. My ideal vision of this role is having somebody
who takes a look at each bug fix committed and writes test cases
for the bug and extrapolates tests for possible related bugs that
haven't yet been found. And who reads the glibc bug tracker so we
can take advantage of their bug reports too.
These could be found using something like LLVM's KLEE or another
SMT-based bug finder. IMO conformance tests should be written with the
project; otherwise what is currently happening with musl (conformance
tests don't exist or have bugs) happens.
I suggest sparse[1]; it is less well known but decent.

[1]https://sparse.wiki.kernel.org/index.php/Main_Page
LM
2013-07-01 11:42:27 UTC
Permalink
Post by Rich Felker
1. Put together a list of relevant conferences one or more of us could
attend in the next 4-6 months. I'm in the US and travelling outside
the country would probably be prohibitive unless we also find
funding, but anything in the US is pretty easy for me to get to,
and other people involved in the project could perhaps attend
other conferences outside the US.
Software Freedom Day is coming up. If list members could mention musl at
their local Software Freedom Day events, that might help get the word out.
When I mentioned musl on my local Linux Users Group's mailing list, the
reaction I got was "wasn't that something that was used with embedded
systems". Might help to let people know about some of the Linux
distributions out there that can be run with musl. Ubuntu gave out free
DVDs of their distribution for various Software Freedom Day events. Maybe
one of the musl based distributions could come up with an ISO for Software
Freedom Day and let the people involved with the event know about it.

Also think it would be useful to make sure musl is listed in the various
Open Source software listing web sites like http://alternativeto.net/ and
http://ostatic.com.

Sincerely,
Laura
Felix Janda
2013-07-01 17:44:12 UTC
Permalink
Hello,
Post by Rich Felker
Post by Rob Landley
Post by Rich Felker
What might be better for the near future is to get the POSIX man pages
project updated to match POSIX-2008+TC1 so that users of musl who want
man pages for libc functions can install them and have them match the
current version.
Michael Kerrisk.
Maybe someone should contact him about all the stuff we're discussing.
I asked him some time ago about the posix man pages and offered help with the
conversion.

He has interest in creating new posix man pages for the SUSV4 TC1. The status
from beginning of June was that he had contacted The Open Group asking for
permission. Possibly, we might get troff sources.

(He also told me that the conversion for the previous posix man pages was done
before he became their maintainer.)

I CC'ed him so that he knows about this discussion.


Felix
Michael Kerrisk (man-pages)
2013-07-02 01:08:52 UTC
Permalink
Gidday,
Post by Felix Janda
Hello,
Post by Rich Felker
Post by Rob Landley
Post by Rich Felker
What might be better for the near future is to get the POSIX man pages
project updated to match POSIX-2008+TC1 so that users of musl who want
man pages for libc functions can install them and have them match the
current version.
Michael Kerrisk.
(To be more precise, it was Andries Brouwer, the previous maintainer,
who carried out the task shortly before I took over.)
Post by Felix Janda
Post by Rich Felker
Maybe someone should contact him about all the stuff we're discussing.
I asked him some time ago about the posix man pages and offered help with the
conversion.
He has interest in creating new posix man pages for the SUSV4 TC1. The status
from beginning of June was that he had contacted The Open Group asking for
permission. Possibly, we might get troff sources.
As of a few days ago, the necessary permissions are granted. I do not
yet have the source files (and so am unsure of the source format), but
I expect that they will become available to us in the next couple of
weeks.
Post by Felix Janda
(He also told me that the conversion for the previous posix man pages was done
before he became their maintainer.)
I CC'ed him so that he knows about this discussion.
Thank you, Felix.

Cheers,

Michael


--
Michael Kerrisk
Linux man-pages maintainer; http://www.kernel.org/doc/man-pages/
Author of "The Linux Programming Interface"; http://man7.org/tlpi/
Isaac
2013-07-06 21:52:27 UTC
Permalink
Hello,
Post by Michael Kerrisk (man-pages)
Gidday,
Post by Felix Janda
Hello,
Post by Rich Felker
Maybe someone should contact him about all the stuff we're discussing.
I asked him some time ago about the posix man pages and offered help with the
conversion.
He has interest in creating new posix man pages for the SUSV4 TC1. The status
from beginning of June was that he had contacted The Open Group asking for
permission. Possibly, we might get troff sources.
As of a few days ago, the necessary permissions are granted. I do not
yet have the source files (and so am unsure of the source format), but
I expect that they will become available to us in the next couple of
weeks.
On a slightly related note, would you be interested in patches for the
Linux manpages briefly documenting places where musl differs from glibc
(in the NOTES section, along the same lines as the notes about libc4/libc5)?
Post by Michael Kerrisk (man-pages)
Cheers,
Michael
Thanks,
Isaac Dunham
Michael Kerrisk (man-pages)
2013-07-06 22:12:04 UTC
Permalink
Hello Isaac,
Post by Felix Janda
Hello,
Post by Michael Kerrisk (man-pages)
Gidday,
Post by Felix Janda
Hello,
Post by Rich Felker
Maybe someone should contact him about all the stuff we're discussing.
I asked him some time ago about the posix man pages and offered help with the
conversion.
He has interest in creating new posix man pages for the SUSV4 TC1. The status
from beginning of June was that he had contacted The Open Group asking for
permission. Possibly, we might get troff sources.
As of a few days ago, the necessary permissions are granted. I do not
yet have the source files (and so am unsure of the source format), but
I expect that they will become available to us in the next couple of
weeks.
On a slightly related note, would you be interested in patches for the
Linux manpages briefly documenting places where musl differs from glibc
(in the NOTES section, along the same lines as the notes about libc4/libc5)?
Historically, man-pages has primarily documented glibc + syscalls, but
there's nothing firm about that. It's more been about limited time
resources and the fact that glibc is the most widely used libc. I'd
have no objection to musl-specific notes in the man-pages. Perhaps a
patch to libc(7) would be a good place to start.

Cheers,

Michael


--
Michael Kerrisk
Linux man-pages maintainer; http://www.kernel.org/doc/man-pages/
Author of "The Linux Programming Interface"; http://man7.org/tlpi/
Justin Cormack
2013-07-06 23:04:04 UTC
Permalink
Post by Michael Kerrisk (man-pages)
Hello Isaac,
Post by Felix Janda
Hello,
Post by Michael Kerrisk (man-pages)
Gidday,
Post by Felix Janda
Hello,
Post by Rich Felker
Maybe someone should contact him about all the stuff we're discussing.
I asked him some time ago about the posix man pages and offered help with the
conversion.
He has interest in creating new posix man pages for the SUSV4 TC1. The status
from beginning of June was that he had contacted The Open Group asking for
permission. Possibly, we might get troff sources.
As of a few days ago, the necessary permissions are granted. I do not
yet have the source files (and so am unsure of the source format), but
I expect that they will become available to us in the next couple of
weeks.
On a slightly related note, would you be interested in patches for the
Linux manpages briefly documenting places where musl differs from glibc
(in the NOTES section, along the same lines as the notes about libc4/libc5)?
Historically, man-pages has primarily documented glibc + syscalls, but
there's nothing firm about that. It's more been about limited time
resources and the fact that glibc is the most widely used libc. I'd
have no objection to musl-specific notes in the man-pages. Perhaps a
patch to libc(7) would be a good place to start.
The man(2) section is rather glibc specific and makes the syscall details
rather subsidiary. I will try to send some patches if these would be
welcome.

Justin
Post by Michael Kerrisk (man-pages)
Cheers,
Michael
--
Michael Kerrisk
Linux man-pages maintainer; http://www.kernel.org/doc/man-pages/
Author of "The Linux Programming Interface"; http://man7.org/tlpi/
Rich Felker
2013-07-07 00:03:24 UTC
Permalink
Post by Isaac
Post by Michael Kerrisk (man-pages)
Post by Isaac
On a slightly related note, would you be interested in patches for the
Linux manpages briefly documenting places where musl differs from glibc
(in the NOTES section, along the same lines as the notes about
libc4/libc5)?
Post by Michael Kerrisk (man-pages)
Historically, man-pages has primarily documented glibc + syscalls, but
there's nothing firm about that. It's more been about limited time
resources and the fact that glibc is the most widely used libc. I'd
have no objection to musl-specific notes in the man-pages. Perhaps a
patch to libc(7) would be a good place to start.
I'm not sure how much effort would be involved. My ideal outcome would
be for the man pages to evolve to document what applications can
_portably_ expect from the interfaces, with appropriate notes on
caveats where certain libc versions or kernel versions give you
less-than-conforming behavior, and where nonstandard extensions are
available. However my feeling is that this would be a very big project
and I'm not sure if Michael would want to go in that direction. I do
think it would greatly improve the quality of Linux software
development, though.
Post by Justin Cormack
The man(2) section is rather glibc specific and makes the syscall details
rather subsidiary. I will try to send some patches if these would be
welcome.
I think it's an error to have anything glibc-specific in section 2 of
the manual, which should be documenting the kernel, not userspace.
What would be useful in the section 2 man pages is to document where
the syscall is insufficient to provide POSIX semantics, which are left
to userspace to provide. Such section 2 pages could then have
corresponding section 3 pages that document the library behavior.

Rich
Michael Kerrisk
2013-07-09 00:18:21 UTC
Permalink
Rich,
Post by Rich Felker
Post by Isaac
Post by Michael Kerrisk (man-pages)
Post by Isaac
On a slightly related note, would you be interested in patches for the
Linux manpages briefly documenting places where musl differs from glibc
(in the NOTES section, along the same lines as the notes about
libc4/libc5)?
Post by Michael Kerrisk (man-pages)
Historically, man-pages has primarily documented glibc + syscalls, but
there's nothing firm about that. It's more been about limited time
resources and the fact that glibc is the most widely used libc. I'd
have no objection to musl-specific notes in the man-pages. Perhaps a
patch to libc(7) would be a good place to start.
I'm not sure how much effort would be involved. My ideal outcome would
be for the man pages to evolve to document what applications can
_portably_ expect from the interfaces,
This is what the man pages endeavor to do. (I consider cases where
non-portable behavior is not clearly indicated to be bugs.)
Post by Rich Felker
with appropriate notes on
caveats where certain libc versions or kernel versions give you
less-than-conforming behavior, and where nonstandard extensions are
available.
That's more or less what the pages do, with the proviso that "certain
libc versions" currently means just glibc, and in a few odd cases,
ancient Linux libc.
Post by Rich Felker
However my feeling is that this would be a very big project
and I'm not sure if Michael would want to go in that direction. I do
think it would greatly improve the quality of Linux software
development, though.
Post by Justin Cormack
The man(2) section is rather glibc specific and makes the syscall details
rather subsidiary. I will try to send some patches if these would be
welcome.
I think it's an error to have anything glibc-specific in section 2 of
the manual, which should be documenting the kernel, not userspace.
What would be useful in the section 2 man pages is to document where
("useful" to who? Few users care about the naked
syscall behavior.)
Post by Rich Felker
the syscall is insufficient to provide POSIX semantics, which are left
to userspace to provide. Such section 2 pages could then have
corresponding section 3 pages that document the library behavior.
See https://www.kernel.org/doc/man-pages/todo.html#migrate_to_kernel_source
I think it would be a retrograde step to split syscall pages into
Sections 2 and 3. Users want to get the documentation in one place.
Note that the approach in man-pages (consolidating info on the syscall
plus any libc additions in one page) is not unique to Linux. From some
(offlist) discussions with the BSD man pages maintainers, it appears
that at least some (all?) of the BSDs do the same.

Cheers,

Michael
Kurt H Maier
2013-07-09 02:36:41 UTC
Permalink
Post by Michael Kerrisk
("useful" to who? Few users care about the naked
syscall behavior.)
If they don't care about syscall behavior, what are they doing reading
section 2?

khm
Rich Felker
2013-07-09 02:53:30 UTC
Permalink
Post by Michael Kerrisk
Post by Rich Felker
However my feeling is that this would be a very big project
and I'm not sure if Michael would want to go in that direction. I do
think it would greatly improve the quality of Linux software
development, though.
Post by Justin Cormack
The man(2) section is rather glibc specific and makes the syscall details
rather subsidiary. I will try to send some patches if these would be
welcome.
I think it's an error to have anything glibc-specific in section 2 of
the manual, which should be documenting the kernel, not userspace.
What would be useful in the section 2 man pages is to document where
("useful" to who? Few users care about the naked
syscall behavior.)
Admittedly, part of the answer is "to me". However I can think of a
good number of others:

1. Anyone doing pure asm programming on Linux. I think this is a
rather bad idea, but there are people who do it.

2. People reading strace output. (For instance, if the kernel returns
a bogus error code and userspace has to translate it, that's
relevant to someone who sees the strace output and errno value in
their program mismatching.)

3. Implementors of any component that uses or provides the syscall.
That includes not only libc, but also qemu app-level emulation, BSD
Linux-syscall ABI emulation, Zvi's psxcalls layer (intended to
eventually allow using musl as the first-ever conforming Windows
libc that's actually deployable, unlike cygwin), ...

4. Anyone trying to understand what libc (musl, glibc, or otherwise)
is doing munging the syscall inputs/results.

5. Kernel developers who want to know the actual contract their
interfaces are supposed to satisfy and preserve.

I suspect there are others, but those are the ones that came to mind
right off.
Post by Michael Kerrisk
Post by Rich Felker
the syscall is insufficient to provide POSIX semantics, which are left
to userspace to provide. Such section 2 pages could then have
corresponding section 3 pages that document the library behavior.
See https://www.kernel.org/doc/man-pages/todo.html#migrate_to_kernel_source
I think it would be a retrograde step to split syscall pages into
Sections 2 and 3.
Yes, that's understandable. I somewhat question why we even still have
a "section 2" in the manual, though...

Rich
Michael Kerrisk (man-pages)
2013-07-09 05:28:17 UTC
Permalink
Rich,
Post by Rich Felker
Post by Michael Kerrisk
Post by Rich Felker
However my feeling is that this would be a very big project
and I'm not sure if Michael would want to go in that direction. I do
think it would greatly improve the quality of Linux software
development, though.
Post by Justin Cormack
The man(2) section is rather glibc specific and makes the syscall details
rather subsidiary. I will try to send some patches if these would be
welcome.
I think it's an error to have anything glibc-specific in section 2 of
the manual, which should be documenting the kernel, not userspace.
What would be useful in the section 2 man pages is to document where
("useful" to whom? Few users care about the naked
syscall behavior.)
Admittedly, part of the answer is "to me". However I can think of a
1. Anyone doing pure asm programming on Linux. I think this is a
rather bad idea, but there are people who do it.
2. People reading strace output. (For instance, if the kernel returns
a bogus error code and userspace has to translate it, that's
relevant to someone who sees the strace output and errno value in
their program mismatching.)
3. Implementors of any component that uses or provides the syscall.
That includes not only libc, but also qemu app-level emulation, BSD
Linux-syscall ABI emulation, Zvi's psxcalls layer (intended to
eventually allow using musl as the first-ever conforming Windows
libc that's actually deployable, unlike cygwin), ...
4. Anyone trying to understand what libc (musl, glibc, or otherwise)
is doing munging the syscall inputs/results.
5. Kernel developers who want to know the actual contract their
interfaces are supposed to satisfy and preserve.
I suspect there are others, but those are the ones that came to mind
right off.
That's a good, comprehensive list of valid, important users, of course.
What I meant to say was that that set of users is a small subset
of the total users of the man pages. But, in any case, the goal is
also to satisfy those users by including notes about the bare syscalls
inside the pages.
Post by Rich Felker
Post by Michael Kerrisk
Post by Rich Felker
the syscall is insufficient to provide POSIX semantics, which are left
to userspace to provide. Such section 2 pages could then have
corresponding section 3 pages that document the library behavior.
See https://www.kernel.org/doc/man-pages/todo.html#migrate_to_kernel_source
I think it would be a retrograde step to split syscall pages into
Sections 2 and 3.
Yes, that's understandable. I somewhat question why we even still have
a "section 2" in the manual, though...
Well then, you'll be amused to hear that the discussion with the BSD
maintainers was about whether FreeBSD (and others) should simply merge
Sections 2 and 3. I can see arguments in favor of it, but they're not
(to my mind) compelling. See one piece from the thread below.

Cheers,

Michael




---------- Forwarded message ----------
From: Michael Kerrisk <***@gmail.com>
Date: Mon, Jun 10, 2013 at 11:14 AM
Subject: Re: Merging man page sections 2 and 3?
To: Matthew Dempsky <***@dempsky.org>
Cc: ***@mckusick.com, ***@netbsd.org, Michael Kerrisk-manpages
<***@gmail.com>, ***@openbsd.org, Jason McIntyre
<***@openbsd.org>, Philip Guenther <***@openbsd.org>


Hello Matthew,

Thanks for including the Linux man-pages project in this discussion. I
appreciate that.
Post by Rich Felker
Hello fellow OS man page maintainers!
Within OpenBSD, I started a discussion recently about sections 2 and
3, and whether it makes sense to continue keeping them separate.
(Do you have a pointer to the archive of that discussion?)
Post by Rich Felker
The
biggest concern raised against merging them is being gratuitously
different from other OSes and how it would affect Xr entries in
third-party manuals, so I was curious in knowing how other OSes feel
about this.
My view is that the distinction between "system call" and "library call"
is very blurred nowadays,
I can certainly find some reasons to agree with you on that. I'm
assuming that the BSDs will be similar to Linux, where there are
various fuzzy cases. For example, we have system calls that have very
simple wrappers; "system calls" (e.g., getpid()) where glibc actually
caches the result of the call, bypassing the system call itself in
most cases (yes, it is a sad, stupid idea, but so it is); system
calls that have rather thicker wrappers in libc; and multiplexed
syscalls (e.g., on x86-32, one system call, socketcall(), provides the
functionality of all the BSD sockets APIs--this came about because of
the history of how the APIs were added to the kernel, long ago). And,
as you noted in response to Kirk, there are cases where library functions
are more expensive than syscalls.
Post by Rich Felker
and it doesn't make sense to continue putting
them in different sections. Certainly there's merit to documenting
the userland<->kernel interface somehow (e.g., because it's exposed
via syscall() and strace/ktrace and other tools), but I think that
could be better achieved other ways. E.g., Linux man-pages has
syscalls.2 that lists all of the system calls and when they were
added, and some of the man pages like stat.2 have implementation notes
describing how glibc implements it using one of a few different actual
system calls depending on the kernel.
Do any of your respective projects have clear and unambiguous
guidelines for which section a function should be documented in? Do
you think your users benefit from having function documentation split
across two sections?
The decision for this is somewhat ad hoc on Linux. If there's
an underlying kernel interface, the documentation tends to end
up in a .2 page. But the line is fuzzy. The POSIX message queue
pages, for example, are in Section 3--even though they in part
document underlying system calls--because there are significant
parts that are in libc. On the other hand, Section 2 pages often
include documentation not just of the raw syscall interface, but also
of the pieces that the glibc API adds on top. For example, the select.2
page includes this text:

Linux Notes
The Linux pselect() system call modifies its timeout argument.
However, the glibc wrapper function hides this behavior by
using a local variable for the timeout argument that is passed
to the system call. Thus, the glibc pselect() function does
not modify its timeout argument; this is the behavior required
by POSIX.1-2001.

There are many other cases similar to that one. The following FAQ
addresses a point tangentially related to this:
https://www.kernel.org/doc/man-pages/todo.html#migrate_to_kernel_source
Post by Rich Felker
Assuming not, how feasible would it be for your project to move its
manual pages from section 2 into section 3? Are there reasons you
could not or would not make that change?
Even if I thought it was a good idea, the biggest impediment would simply
be the effort (for me) and churn (for third-party xrefs) that such a
change would cause. Without looking too closely, it's not clear to
me that the change would just be a matter of some simple scripting.
Post by Rich Felker
Certainly there's a significant initial effort to reorganize things,
but I expect that to be a one-time cost largely solvable by sed.
Also, third party manual pages might have inaccurate Xr entries
initially, but over time I expect upstream projects to adapt and we'll
end up with more accurate cross-references everywhere. (And in the
interim, it's possible to work around this by having "man 2 foo" also
search section 3 or something.)
For what it's worth, I did a quick non-scientific scan of the ~2000
packages installed on my OpenBSD machine, and found only ~1% of the
man pages contained cross-references to section 2 (which also included
hilarious mistakes like "Xr strcat 2"). I'd be very interested if
others tried to reproduce this experiment on different package sets
and got differing results.
Anyway, if you haven't had a knee-jerk "but that's how we've always
done it!" reaction, I'm very interested in your thoughts on the
matter. :)
So, I don't have a knee-jerk reaction ;-). I agree that the current
situation (at least on Linux, and I guess elsewhere) is imperfect.
Things are not nearly so clear cut as the .2/.3 distinction implies.
That said, I'm inclined to leave things be, for several reasons:

* Users expect the distinction. (This is not a compelling
argument, I agree.)
* Making the change requires time and effort, and my time is
sadly limited.
* The syscall / lib function distinction does (as you note) need to
be made clear somehow. The current setup does that moderately
(but not perfectly) well on Linux.
* I see no compelling reason to make the change you suggest (you
didn't really make the case at the start of your mail).

Best regards,

Michael
Rob Landley
2013-07-10 19:39:34 UTC
Permalink
Post by Rich Felker
Post by Rich Felker
Post by Michael Kerrisk
Post by Rich Felker
the syscall is insufficient to provide POSIX semantics, which are left
Post by Rich Felker
Post by Michael Kerrisk
Post by Rich Felker
to userspace to provide. Such section 2 pages could then have
corresponding section 3 pages that document the library behavior.
See https://www.kernel.org/doc/man-pages/todo.html#migrate_to_kernel_source
Post by Rich Felker
Post by Michael Kerrisk
I think it would be a retrograde step to split syscall pages into
Sections 2 and 3.
I note that I'm nominally the kernel Documentation maintainer. If you'd
like a Documentation/syscall directory handed over to you in
MAINTAINERS, I can do that. (Or Documentation/DocBook/syscall, up to
you...)

(I don't do nearly enough with it due to lack of time, and because
every patch series in the world has a documentation bit I get cc'd on.
How am _I_ supposed to judge the correct locking requirements for a
Heterodyne Death Ray? So half the time I just go "You need a comma in
'Fools I shall destroy you all!'" and then ack it. It still eats up the
time I have to devote to that topic, most weeks.)

At some point I'd like to completely reorganize that directory so (for
example) all the architecture directories are under "arch". But this
involves me setting up a git tree somewhere I can upload to and send
pull requests about, and that's just icky enough to stay well below the
surface of my todo list...
Post by Rich Felker
Post by Rich Felker
Yes, that's understandable. I somewhat question why we even still have
Post by Rich Felker
a "section 2" in the manual, though...
Well then, you'll be amused to hear that the discussion with the BSD
maintainers was about whether FreeBSD (and others) should simply merge
Sections 2 and 3. I can see arguments in favor of it, but they're not
(to my mind) compelling. See one piece from the thread below.
A system call is a different thing than a library call, even in libc.
The fact glibc gets them confused is a problem with glibc.

In theory there is a "clean upstream" system call set in posix, and a
"clean upstream" libc call set in c99 and/or posix. (In practice
there's noting like subscribing to the austin group mailing list to
rapidly erode your faith in the upcoming Posix standard. The sausage is
made of people! And they're _INSANE_.)

Rob
Rob Landley
2013-07-09 16:42:49 UTC
Permalink
Post by Michael Kerrisk
Rich,
Post by Rich Felker
I think it's an error to have anything glibc-specific in section 2 of
Post by Rich Felker
the manual, which should be documenting the kernel, not userspace.
What would be useful in the section 2 man pages is to document where
("useful" to who? Few users care about the naked
syscall behavior.)
We exist. :)

Speaking of which, the data blob sent to sched_{get,set}affinity() is an
array of longs, with each processor's bit living at:

int x = 255 & (mask[cpu/sizeof(long)] >> (8*(cpu&(sizeof(long)-1))));

I know this because I implemented taskset against the raw system call,
and read kernel code until I found the obscure corner where this is
actually documented, namely:

arch/powerpc/include/asm/bitops.h

(And yes, _only_ described in that architecture, not in any of the
others. Go figure.)

(The toybox project is not a GNU program, and does not #define
GNU_DAMMIT to access extra magic header bits. Where necessary, it
provides its own header definitions.)
Post by Michael Kerrisk
Post by Rich Felker
the syscall is insufficient to provide POSIX semantics, which are left
Post by Rich Felker
to userspace to provide. Such section 2 pages could then have
corresponding section 3 pages that document the library behavior.
See https://www.kernel.org/doc/man-pages/todo.html#migrate_to_kernel_source
I think it would be a retrograde step to split syscall pages into
Sections 2 and 3. Users want to get the documentation in one place.
Note that the approach in man-pages (consolidating info on the syscall
plus any libc additions in one page) is not unique to Linux. From some
(offlist) discussions with the BSD man pages maintainers, it appears
that at least some (all?) of the BSDs do the same.
Document what the syscall does, and then have wrapper behavior listed
in the "deviant glibc-specific perversions" section?

Syscall wrappers in Section 2 make sense; it _is_ a syscall, and most
wrappers should be NOPs. The objection is to not documenting what the
actual syscall does (when you can call it via syscall(), or get the raw
behavior when using klibc).

Rob
Rich Felker
2013-07-09 16:50:41 UTC
Permalink
Post by Rob Landley
Post by Michael Kerrisk
See https://www.kernel.org/doc/man-pages/todo.html#migrate_to_kernel_source
I think it would be a retrograde step to split syscall pages into
Sections 2 and 3. Users want to get the documentation in one place.
Note that the approach in man-pages (consolidating info on the syscall
plus any libc additions in one page) is not unique to Linux. From some
(offlist) discussions with the BSD man pages maintainers, it appears
that at least some (all?) of the BSDs do the same.
Document what the syscall does, and then have wrapper behavior
listed in the "deviant glibc-specific perversions" section?
Usually the difference between sections 2 and 3 is not "deviant
glibc-specific perversions" but "workarounds for kernel bugs that
won't be fixed because of the policy of maintaining stable syscall
API/ABI".
Post by Rob Landley
Syscall wrappers in Section 2 make sense, it _is_ a syscall, and
most wrappers should be NOPs.
None of the ones that are cancellation points can be pure wrappers. If
nothing else, they must handle cancellation. That covers a big chunk
already.
Post by Rob Landley
The objection is not documenting what
the actual syscall does (when you can call it via syscall(), or get
the raw behavior when using klibc).
I agree it would be useful to have this information, but with limited
resources and the issue of confusing developers who get section 2 by
default then have to look again in section 3 for the man page they
actually wanted, I can see where it's probably preferable to maintain
the status quo, leaving the 2/3 split based on the historical
expectation of whether the function was a "syscall" or "library
function", and documenting Linux syscall deviations from the public
interface as part of the combined man page.

Rich
Isaac
2013-07-26 19:20:25 UTC
Permalink
Post by Michael Kerrisk (man-pages)
Gidday,
Post by Felix Janda
Hello,
Post by Rob Landley
Post by Rich Felker
What might be better for the near future is to get the POSIX man pages
project updated to match POSIX-2008+TC1 so that users of musl who want
man pages for libc functions can install them and have them match the
current version.
Michael Kerrisk.
(To be more precise, it was Andries Brouwer, the previous maintainer,
who carried out the task shortly before I took over.)
<snip>
Post by Michael Kerrisk (man-pages)
Post by Felix Janda
He has interest in creating new posix man pages for SUSv4 TC1. The status
as of the beginning of June was that he had contacted The Open Group
asking for permission. Possibly, we might get troff sources.
As of a few days ago, the necessary permissions are granted. I do not
yet have the source files (and so am unsure of the source format), but
I expect that they will become available to us in the next couple of
weeks.
I'm curious what the status of this is. Do you have the sources yet?

(BTW, if you haven't got them yet, I noticed these:
http://austingroupbugs.net/view.php?id=694
http://austingroupbugs.net/view.php?id=715
and a few others that refer to use of groff with the mm macros.)
Post by Michael Kerrisk (man-pages)
--
Michael Kerrisk
Linux man-pages maintainer; http://www.kernel.org/doc/man-pages/
Author of "The Linux Programming Interface"; http://man7.org/tlpi/
John Spencer
2013-09-06 12:23:40 UTC
Permalink
Post by Isaac
Post by Michael Kerrisk (man-pages)
As of a few days ago, the necessary permissions are granted. I do not
yet have the source files (and so am unsure of the source format), but
I expect that they will become available to us in the next couple of
weeks.
I'm curious what the status of this is. Do you have the sources yet?
i am very interested in a status update as well.
getting the posix 2008 manpages into my distro has been a todo item for
a long time
https://github.com/rofl0r/sabotage/issues/34

regards,
--JS
Michael Kerrisk (man-pages)
2013-09-08 06:05:24 UTC
Permalink
Post by John Spencer
Post by Isaac
Post by Michael Kerrisk (man-pages)
As of a few days ago, the necessary permissions are granted. I do not
yet have the source files (and so am unsure of the source format), but
I expect that they will become available to us in the next couple of
weeks.
I'm curious what the status of this is. Do you have the sources yet?
i am very interested in a status update as well.
getting the posix 2008 manpages into my distro has been a todo item for a
long time
https://github.com/rofl0r/sabotage/issues/34
A while back we got permission from the IEEE and The Open Group to use
the POSIX.1-2013 pages (==POSIX.1-2008 + Technical Corrigendum 1,
published 2013).

However, the source text that has been provided to us needs massaging
in a number of ways before it can be published as a set of standalone
pages. Felix Janda kindly offered to take on that task, and has been
making very good progress. I expect that in a week or two, we'll have
a set of pages for public review (to see if there are any remaining
bugs in the "massaging process" that Felix or I did not spot).

Cheers,

Michael
--
Michael Kerrisk
Linux man-pages maintainer; http://www.kernel.org/doc/man-pages/
Author of "The Linux Programming Interface"; http://man7.org/tlpi/
John Spencer
2013-09-09 04:44:37 UTC
Permalink
Post by Michael Kerrisk (man-pages)
Post by John Spencer
Post by Isaac
Post by Michael Kerrisk (man-pages)
As of a few days ago, the necessary permissions are granted. I do not
yet have the source files (and so am unsure of the source format), but
I expect that they will become available to us in the next couple of
weeks.
I'm curious what the status of this is. Do you have the sources yet?
i am very interested in a status update as well.
getting the posix 2008 manpages into my distro has been a todo item for a
long time
https://github.com/rofl0r/sabotage/issues/34
A while back we got permission from the IEEE and The Open Group to use
the POSIX.1-2013 pages (==POSIX.1-2008 + Technical Corrigendum 1,
published 2013).
However, the source text that has been provided to us needs massaging
in a number of ways before it can be published as a set of standalone
pages. Felix Janda kindly offered to take on that task, and has been
making very good progress. I expect that in a week or two, we'll have
a set of pages for public review (to see if there are any remaining
bugs in the "massaging process" that Felix or I did not spot).
that's very good news.
btw, i am currently writing (or improving) a non-bloated man
implementation: https://raw.github.com/rofl0r/hardcore-utils/master/man.c .
in the process i noted that the posix manpages from 2003 make use of
some groff features like the tbl preprocessor (a hideous hack to display
html-like tables in a completely unidiomatic way), for example in man 1p
printf.
those features are not supported by any traditional *roff
implementation, so i wonder if the next iteration of the posix manpages
could stick to the basic nroff function set to improve compatibility...


regards,
--JS
Anthony J. Bentley
2013-09-09 05:29:10 UTC
Permalink
Post by John Spencer
Post by Michael Kerrisk (man-pages)
Post by John Spencer
Post by Isaac
Post by Michael Kerrisk (man-pages)
As of a few days ago, the necessary permissions are granted. I do not
yet have the source files (and so am unsure of the source format), but
I expect that they will become available to us in the next couple of
weeks.
I'm curious what the status of this is. Do you have the sources yet?
i am very interested in a status update as well.
getting the posix 2008 manpages into my distro is a todo item since a long
time
https://github.com/rofl0r/sabotage/issues/34
A while back we got permission from the IEEE and The Open Group to use
the POSIX.1-2013 pages (==POSIX.1-2008 + Technical Corrigendum 1,
published 2013).
However, the source text that has been provided to us needs massaging
in a number of ways before it can be published as a set of standalone
pages. Felix Janda kindly offered to take on that task, and has been
making very good progress. I expect that in a week or two, we'll have
a set of pages for public review (to see if there are any remaining
bugs in the "massaging process" that Felix or I did not spot).
that's very good news.
btw, i am currently writing (or improving) a non-bloated man
implementation: https://raw.github.com/rofl0r/hardcore-utils/master/man.c .
Along those lines is mandoc, which is not a full troff implementation but
does support tbl, man, and mdoc. Used as the default man in several BSDs
and Minix. Also supports output to html (actually its original purpose).

http://mdocml.bsd.lv/
--
Anthony J. Bentley
Daniel Cegiełka
2013-09-09 05:40:49 UTC
Permalink
Post by Anthony J. Bentley
Along those lines is mandoc, which is not a full troff implementation but
does support tbl, man, and mdoc. Used as the default man in several BSDs
and Minix. Also supports output to html (actually its original purpose).
http://mdocml.bsd.lv/
I use mandoc with musl and it works great (a very elegant solution and
worth recommending).

Daniel
Rob Landley
2013-09-19 02:58:32 UTC
Permalink
Post by Anthony J. Bentley
Post by Anthony J. Bentley
Along those lines is mandoc, which is not a full troff implementation but
does support tbl, man, and mdoc. Used as the default man in several BSDs
and Minix. Also supports output to html (actually its original purpose).
http://mdocml.bsd.lv/
I use mandoc with musl and it works great (a very elegant solution and
worth recommending).
I don't see this listed on the wiki page of other lightweight packages?

Rob
John Spencer
2013-09-19 09:54:39 UTC
Permalink
Post by Rob Landley
Post by Anthony J. Bentley
Post by Anthony J. Bentley
Along those lines is mandoc, which is not a full troff implementation but
does support tbl, man, and mdoc. Used as the default man in several BSDs
and Minix. Also supports output to html (actually its original purpose).
http://mdocml.bsd.lv/
I use mandoc with musl and it works great (a very elegant solution and
worth recommending).
I don't see this listed on the wiki page of other lightweight packages?
added.
btw, you have a wiki account as well, don't you? (rhetorical question,
no answer expected)

Rob Landley
2013-07-04 18:05:06 UTC
Permalink
Post by Rich Felker
Hi all,
With us nearing musl 1.0, there are a lot of things I could use some
1. Put together a list of relevant conferences one or more of us could
attend in the next 4-6 months. I'm in the US and travelling outside
the country would probably be prohibitive unless we also find
funding, but anything in the US is pretty easy for me to get to,
and other people involved in the project could perhaps attend
other conferences outside the US.
CELF turned into the "Linux Foundation Embedded Linux Conference" and
they squished it together with some Android thing. I've never been to
the Plumbers conference that half my twitter list seems to go to.

Ohio LinuxFest's call for papers is open until Monday:

---

Date: Thu, 27 Jun 2013 10:03:14 -0400
Subject: Ohio LinuxFest Call for Talks closing soon
From: Kevin O'Brien <***@ohiolinux.org>

Hello, we are asking for a little help from you in getting the word out
that the Ohio LinuxFest Call for Talks will be closing 7/8/13. Obviously
the more proposals we get, the better the program we can put together, so
we are asking if you could help us out by posting something to your
followers on places like Facebook, Google+, LinkedIn, Twitter, and
Identi.ca (or any other social media you like). The submission page is
https://ohiolinux.org/cfp.

Thanks for any help you can give us.

--
Kevin O'Brien
Publicity Director, Ohio LinuxFest
***@ohiolinux.org
https://ohiolinux.org
Post by Rich Felker
5. Rigorous testing. My ideal vision of this role is having somebody
who takes a look at each bug fix committed and writes test cases
for the bug and extrapolates tests for possible related bugs that
haven't yet been found. And who reads the glibc bug tracker so we
can take advantage of their bug reports too.
Is the Linux Test Project relevant?

Rob
Szabolcs Nagy
2013-07-08 07:40:38 UTC
Permalink
Post by Rob Landley
Post by Rich Felker
5. Rigorous testing. My ideal vision of this role is having somebody
who takes a look at each bug fix committed and writes test cases
for the bug and extrapolates tests for possible related bugs that
haven't yet been found. And who reads the glibc bug tracker so we
can take advantage of their bug reports too.
Is the Linux Test Project relevant?
most of their tests are for kernel-related features,
some of them are pretty outdated (stress testing floppy io..),
and not very high quality (mostly written by ibm folks).
there is a large set of 'openhpi' tests and the entire
posix_testsuite (already audited), and there is a fair number
of network tests, mostly sctp and nfs.

it seems a bit messy and not quite what we want
Rob Landley
2013-07-09 02:14:15 UTC
Permalink
Post by Rich Felker
Post by Rob Landley
Post by Rich Felker
5. Rigorous testing. My ideal vision of this role is having somebody
who takes a look at each bug fix committed and writes test cases
for the bug and extrapolates tests for possible related bugs that
haven't yet been found. And who reads the glibc bug tracker so we
can take advantage of their bug reports too.
Is the Linux Test Project relevant?
most of their tests are for kernel related features
some of them are pretty outdated (stress testing floppy io..)
and not very high quality (mostly written by ibm folks)
there is a large set of 'openhpi' tests and the entire
posix_testsuit (already audited), there is a fair amount
of network tests, mostly sctp and nfs
it seems a bit messy and not quite what we want
The Linux Foundation was formed by merging OSDL with the Linux
Standards Group into "a voltron of bureaucracy", so I'm not
surprised their actual testing and standardization functions
essentially stopped.

The purpose of OSDL was to provide Linus Torvalds with a salary
independent of any specific company. Unfortunately, the amount of money
companies contributed to it was well above Linus's needs, so they went on
to justify _having_ so much money by getting offices and hiring people,
meaning that instead of a trust fund they now needed more money on a
regular basis.

So they set themselves up as "the face of Linux" for corporations the
same way AOL set itself up as the face of the internet for dialup users
in the 1990's, and promised that they could translate between suit and
geek. And they got very very good at talking to suits (where the money
comes from), and are baffled by the _existence_ of hobbyists. (People
do open source without getting paid? Inconceivable! They can't possibly
be relevant to the process, they're just hangers-on mooching off our
extensively funded work. Free riders we tolerate for historical
reasons...)

So yeah, not surprising if LSB became a corporate rubber stamp. Sad,
but not surprising.

Rob
Matthew Fernandez
2013-07-09 02:57:45 UTC
Permalink
Post by Rob Landley
So they set themselves up as "the face of Linux" for corporations the
same way AOL set itself up as the face of the internet for dialup users
in the 1990's, and promised that they could translate between suit and
geek. And they got very very good at talking to suits (where the money
comes from), and are baffled by the _existence_ of hobbyists. (People
do open source without getting paid? Inconceivable! They can't possibly
be relevant to the process, they're just hangers-on mooching off our
extensively funded work. Free riders we tolerate for historical
reasons...)
To briefly defend the Linux Foundation, some of their employees spend a
great deal of their time engaging with the hobbyist community. Jennifer
Cloer and Carla Schroder spring to mind. I'm not affiliated with the
Foundation; this is just my two cents :)

Rich Felker
2013-07-08 14:45:59 UTC
Permalink
Post by Rob Landley
Post by Rich Felker
Hi all,
With us nearing musl 1.0, there are a lot of things I could use some
1. Put together a list of relevant conferences one or more of us could
attend in the next 4-6 months. I'm in the US and travelling outside
the country would probably be prohibitive unless we also find
funding, but anything in the US is pretty easy for me to get to,
and other people involved in the project could perhaps attend
other conferences outside the US.
CELF turned into the "Linux Foundation Embedded Linux Conference"
and they squished it together with some Android thing. I've never
been to the Plumbers conference that half my twitter list seems to go to.
I wish we'd noticed a few days sooner. I've been thinking about
submitting a proposal for a talk, especially since the timing is so
good (September 13-15), but I'm having trouble coming up with a good
topic.

My leaning so far is to propose something related to the state of
library quality on Linux, dealing with both libc (the issues in glibc
and uclibc that led to musl, and how glibc has been improving since
then) and non-system libraries that play a core role on modern Linux
systems (glib, etc.) and how they impact robustness (abort!) and
portability.

The idea is that it could be mildly promotional for musl, but also a
chance to promote proper library practices, possibly my definition of
"library-safe" (if we work out a good one), etc.

Rich
Rich Felker
2013-07-09 02:54:40 UTC
Permalink
Post by Rich Felker
I wish we'd noticed a few days sooner. I've been thinking about
submitting a proposal for a talk, especially since the timing is so
good (September 13-15), but I'm having trouble coming up with a good
topic.
My leaning so far is to propose something related to the state of
library quality on Linux, dealing with both libc (the issues in glibc
and uclibc that lead to musl, and how glibc has been improving since
then) and non-system libraries that play a core role on modern Linux
systems (glib, etc.) and how they impact robustness (abort!) and
portability.
The idea is that it could be mildly promotional for musl, but also a
chance to promote proper library practices, possibly my definition of
"library-safe" (if we work out a good one), etc.
I proposed a talk along these lines today before the deadline. Hoping
it gets accepted.

Rich
Anthony G. Basile
2013-07-08 14:12:28 UTC
Permalink
Post by Rich Felker
Hi all,
With us nearing musl 1.0, there are a lot of things I could use some
1. Put together a list of relevant conferences one or more of us could
attend in the next 4-6 months. I'm in the US and travelling outside
the country would probably be prohibitive unless we also find
funding, but anything in the US is pretty easy for me to get to,
and other people involved in the project could perhaps attend
other conferences outside the US.
2. Organize patches from sabotage, musl-cross, musl pkgsrc, etc. into
suitable form for upstream, and drafting appropriate emails or bug
reports to send.
3. Check status of musl support with build systems and distros
possibly adopting it as an option or their main libc. This would
include OpenWRT, Aboriginal, crosstool-ng, buildroot, Alpine Linux,
Gentoo, etc. If their people doing the support seem to have gotten
stuck or need help, offer assistance. Make sure the wiki is kept
updated with info on other projects using musl so we can send folks
their way too.
4. Wikimastering. The wiki could use a lot of organizational
improvement, additional information, and monitoring for outdated
information that needs to be updated or removed.
5. Rigorous testing. My ideal vision of this role is having somebody
who takes a look at each bug fix committed and writes test cases
for the bug and extrapolates tests for possible related bugs that
haven't yet been found. And who reads the glibc bug tracker so we
can take advantage of their bug reports too.
Anyone up for volunteering? :-)
Rich
You can watch (and critically comment) on my progress with gentoo + musl
at the following repo:

http://git.overlays.gentoo.org/gitweb/?p=proj/hardened-dev.git;a=shortlog;h=refs/heads/musl

My goal is to get patches that can eventually be incorporated upstream,
where upstream means either 1) Gentoo, if it's a Gentoo-specific issue,
or 2) the developer/maintainer of the code.

My approach to #2 will be to patch the build system to detect what is
available and what isn't, breaking down assumptions like linux = glibc.
Currently, though, I'm busy just "dirty" hacking, i.e. just making sure
things compile in the face of missing headers, etc.
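
The build-system detection described above can be sketched roughly as
follows; the function name and the choice of pipe2() as the probed symbol
are my own illustration, not Anthony's actual patches:

```shell
#!/bin/sh
# Hypothetical sketch: compile a tiny test program to ask the toolchain
# what the libc actually provides, instead of assuming "linux = glibc".
probe_compiles() {
    printf '%s\n' "$1" > conftest.c
    if ${CC:-cc} -o conftest conftest.c 2>/dev/null; then
        result=yes
    else
        result=no
    fi
    rm -f conftest conftest.c
    echo "$result"
}

# Does this libc provide pipe2()?  glibc and musl both do, but the point
# is that a configure script should test rather than assume.
have_pipe2=$(probe_compiles '#define _GNU_SOURCE
#include <unistd.h>
#include <fcntl.h>
int main(void){int fd[2];return pipe2(fd,O_CLOEXEC)<0;}')
echo "have pipe2: $have_pipe2"
```

The same probe-and-record pattern is what autoconf's AC_CHECK_FUNCS does
under the hood; the win is that packages stop keying behavior off
"__linux__" and start keying it off what actually compiled.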

#1 is not trivial either, as it requires things like a wrapper for
ldconfig, which our package management system assumes is available. When
emerging gcc, libgcc_s.so.1 must be made available or breakage ensues!
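
For the ldconfig problem, a stub may be enough, since musl's dynamic
linker reads a plain path file (/etc/ld-musl-$ARCH.path) and keeps no
binary ld.so.cache. A minimal sketch, not what Gentoo actually ships:

```shell
#!/bin/sh
# Hypothetical ldconfig stand-in for a musl system.  There is no cache
# to rebuild, so the wrapper mainly has to exist and exit 0 so that
# portage and friends do not choke.
fake_ldconfig() {
    case "$1" in
        -p)
            # "print cache": just list libraries on the musl search path
            for dir in $(cat /etc/ld-musl-*.path 2>/dev/null | tr ':' ' '); do
                ls "$dir"/lib*.so* 2>/dev/null
            done
            ;;
        *)
            :   # rebuild request: nothing to do on musl
            ;;
    esac
    return 0
}

fake_ldconfig          # the common "rebuild the cache" invocation
echo "ldconfig rc=$?"
```

Running it prints `ldconfig rc=0`; the -p branch only emits something if
a ld-musl path file is actually present on the host.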

It appears our goals overlap, so I guess this makes me a volunteer.
--
Anthony G. Basile, Ph. D.
Chair of Information Technology
D'Youville College
Buffalo, NY 14201
(716) 829-8197