Discussion: embedded newbies site
Rob Landley
2013-07-16 02:03:34 UTC
As I was ruminating on IRC:

Once upon a time the busybox/uClibc community provided a condensation
nucleus for most embedded Linux development: website, mailing list, and
#uclibc irc channel here on freenode. But that fell apart in 2005
(buildroot was one project too many, openwrt was separate and based on
the horrible linksys build system, various corporate efforts like Maemo
and Moblin and OpenMoko did their own thing...)

A lot of the stuff I learned about embedded development would be hard
to replicate today because there's no community aimed at bringing
newbies up to speed on this stuff. I know we've got a wiki page
collecting other interesting packages that work with musl, but there's
more to it than that.

My path to embedded development went something like this:

1) remove unnecessary packages from existing distros
2) copy files into an empty directory to make a working chroot that
runs my app
3) build Linux From Scratch from source
4) swap in busybox and uClibc in LFS, remove unnecessary stuff,
customize result

It would be nice to have a site walking people through these steps, and
hosting an "Embedded Linux From Scratch" as a wiki, and with associated
mailing list and IRC channel. (It was sort of #edev for a while; now
#musl is more active, but half the discussion on it is off topic...)

I'd like an explicit place to collect and preserve information about
this sort of thing, and a place we can send newbies to ask all the
stupid questions. The main page should teach somebody what embedded
development _is_ and how to do it, starting with how to build and
install the simplest Linux system that boots to a shell prompt (three
packages: linux, musl, and toybox).

Then there should be additional modules people can pick and choose from:
- analyzing an existing system
- creating a chroot with ldd
- your friend strace
- why PID 1 is special, init via shell script, classic System V init
(see the minimal C sketch after this outline).
- what's in /etc, /var, /usr, /bin, /lib, /tmp
- intro to relevant standards
- what/where are posix, lsb, fhs, elf... plus a ~3 paragraph
summary of each.
- a tour of libc
- readelf -a, static vs dynamic, the dynamic linker
- man 2 vs man 3, stdio subsystem, -lpthread
- a tour of the kernel
- yeah yeah, can o' worms, but
http://kernel.org/doc/single/lki-single.html
needs a brand new version for 3.x.
- a tour of compiler toolchains
- the six paths, why a docbook->pdf converter and toolchain
aren't different.
- creating a development environment (building binutils, gcc,
make...)
- requirements for self-hosting
- requirements for natively building Linux From Scratch
- cross vs native compiling
- bootstrapping to native compiling under emulation.
- cross compiling for non-x86 systems (with qemu)
- bootloaders, jtags, ...
- booting a simple gui
- fishing the x11 stuff out of BLFS, booting fvwm or dwm or
something.
- getting client-side networking working
- ifconfig, route, iwlist, iwconfig, maybe wpa-supplicant...
- setting up a server
- iptables
- apache, postfix, samba
- reproducing android userspace
- root vs non-root
- processes, files, suid, sgid, sticky bit
- security nuttiness
- selinux, extended attributes, apparmor, capability bits
- containers: not doing any of that
- why "users" and "groups" wasn't good enough.

- efficient (elegant) programming
- Why C and scripting languages, why NOT C++ and autoconf
- tradeoffs
- code reuse
- transaction granularity
- taking advantage of SMP without going crazy
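
(Since the outline mentions PID 1: a minimal, hedged C sketch of why
it's special. Everything here, including the /bin/sh path, is an
illustrative assumption rather than a recommended init.)

  /* Minimal illustrative init: PID 1 is special mostly because every
     orphaned process gets reparented to it, so it has to keep reaping
     children forever or the system fills up with zombies. */
  #include <errno.h>
  #include <sys/wait.h>
  #include <unistd.h>

  int main(void)
  {
      pid_t pid = fork();
      if (!pid) {
          char *argv[] = { "/bin/sh", 0 };
          execv(argv[0], argv);  /* child becomes the console shell */
          _exit(127);            /* only reached if exec failed */
      }
      for (;;)
          if (wait(0) < 0 && errno == ECHILD)
              pause();           /* nothing left to reap; sleep */
  }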

Yeah yeah, a lot of this is listing stuff I could write, but the point
is there's lots of stuff I can't write and don't know, and there should
be other people who can answer questions...

I bump into stuff like suckless and hope it'll turn into this, but so
far it hasn't...

Rob
Strake
2013-07-16 03:18:20 UTC
Post by Rob Landley
- creating a development environment (building binutils, gcc,
make...)
- requirements for self-hosting
- requirements for natively building Linux From Scratch
- cross vs native compiling
- bootstrapping to native compiling under emulation.
This. For me, at least, this is often the greatest hindrance.
Post by Rob Landley
- efficient (elegant) programming
- Why C and scripting languages, why NOT C++ and autoconf
- tradeoffs
- code reuse
- transaction granularity
- taking advantage of SMP without going crazy
I would be glad to help here.

May find some ideas here:
http://harmful.cat-v.org/software/
LM
2013-07-17 12:07:31 UTC
Post by Strake
Post by Rob Landley
- efficient (elegant) programming
- Why C and scripting languages, why NOT C++ and autoconf
- tradeoffs
- code reuse
- transaction granularity
- taking advantage of SMP without going crazy
I would be glad to help here.
http://harmful.cat-v.org/software/
I hope I didn't open up a can of worms with my last question on PCRE
library versus other regex libraries, but I think it at least led to some
useful information for the wiki about musl's regex implementation and an
interesting point about NetBSD using TRE and a possibility of sharing bug
fixes with that project. So, I'll just mention this in passing for those
who are interested in C++ in addition to C. (If you prefer C only, feel
free to ignore it.) There's a thread I thought was interesting regarding
using C++ with embedded systems over on LinkedIn
http://www.linkedin.com/groups/C-firmware-development-embedded-system-37565.S.252210483?view=&srchtype=discussedNews&gid=37565&item=252210483&type=member&trk=eml-anet_dig-b_pd-ttl-hdp&fromEmail=&ut=3UNSr99M7Ts5Q1

Hope something takes off with Rob Landley's suggestion about a site for
embedded Linux from Scratch information. I'd be very interested in seeing
further information on pros and cons and efficiencies of using various
libraries and applications (like make style utilities, bash-like shells,
etc.). Would also be useful to know what's been done, what could be redone
better, what's in progress, etc. Seems like there's a very good knowledge
base from the people on the musl mailing list and the members give more
informative responses (regarding performance, efficiency, algorithms,
what's already been coded) than many of the posts I read on some of the LFS
mailing lists. It's especially helpful when you don't have time to comb
through all the code of the various implementations available to just be
able to ask others who have already investigated and found good working
solutions what they recommend. I'd be very interested in hearing opinions
and pros and cons on other utilities and libraries. However, it just feels
off-topic to ask on this list since it's more peripherally related to using
musl even though the most interesting responses would probably come from
here. lfs has a chat list for topics that don't fall into the other lfs
mailing categories. Maybe something similar could be useful or maybe some
resource for newbies to embedded systems could fill the purpose.

Sincerely,
Laura
Rich Felker
2013-07-17 13:58:30 UTC
Post by LM
Post by Strake
Post by Rob Landley
- efficient (elegant) programming
- Why C and scripting languages, why NOT C++ and autoconf
- tradeoffs
- code reuse
- transaction granularity
- taking advantage of SMP without going crazy
I would be glad to help here.
http://harmful.cat-v.org/software/
I hope I didn't open up a can of worms with my last question on PCRE
Not at all.
Post by LM
library versus other regex libraries, but I think it at least led to some
useful information for the wiki about musl's regex implementation and an
interesting point about NetBSD using TRE and a possibility of sharing bug
fixes with that project.
Actually, the last time I spoke with the author, he informed me he did
not have time to work on it himself, but would appreciate bug fix
patches for upstream. I also had more pressing things to work on, but
I think getting the fixes back to him would be worthwhile. Perhaps
someone else could look at the git log for the bug fixes in musl and
figure out how to do the same upstream; it's not 100% trivial since
musl's version of TRE has been stripped down and simplified quite a
bit.

Rich
James B
2013-07-20 15:17:51 UTC
Post by Strake
Post by Rob Landley
- creating a development environment (building binutils, gcc,
make...)
- requirements for self-hosting
- requirements for natively building Linux From Scratch
- cross vs native compiling
- bootstrapping to native compiling under emulation.
This. For me, at least, this is often the greatest hindrance.
That makes two of us. There are many tools for making cross compilers
(aboriginal, crosstools-ng, buildroot, etc.) but I haven't found one that
guides you through moving to native compiling (= creating the native
compilers) once you have the cross-compilers and bootable rootfs (I know,
aboriginal *does* create native compilers so I should read Rob's scripts
for that ...).

That being said, the other topics are pretty relevant too.

Also, does anyone think that CLFS is a good start for this? One thing that I
notice about (C)LFS is that the steps are there but the rationale and
explanation isn't; so it encourages people to follow a recipe without
knowing *why* things have to be done in a certain way (to be fair, the main
LFS (not its CLFS variants) does have some kind of explanation but it could
be improved).

cheers!
Andrew Bradford
2013-07-22 12:27:56 UTC
Post by James B
Post by Strake
Post by Rob Landley
- creating a development environment (building binutils, gcc,
make...)
- requirements for self-hosting
- requirements for natively building Linux From Scratch
- cross vs native compiling
- bootstrapping to native compiling under emulation.
This. For me, at least, this is often the greatest hindrance.
That makes two of us. There are many tools for making cross compilers
(aboriginal, crosstools-ng, buildroot, etc.) but I haven't found one that
guides you through moving to native compiling (= creating the native
compilers) once you have the cross-compilers and bootable rootfs (I know,
aboriginal *does* create native compilers so I should read Rob's scripts
for that ...).
That being said, the other topics are pretty relevant too.
Also, does anyone think that CLFS is a good start for this? One thing that I
notice about (C)LFS is that the steps are there but the rationale and
explanation isn't; so it encourages people to follow a recipe without
knowing *why* things have to be done in a certain way (to be fair, the main
LFS (not its CLFS variants) does have some kind of explanation but it could
be improved).
I'm currently one of the only developers working on the CLFS
Embedded book [1]. I'd happily take patches to describe why things work
the way they do and why the steps are what they are. The biggest
hindrance the CLFS embedded book has is a lack of both developer time
and experience; the main core people who started the embedded book
haven't contributed much in the past few years, I assume due to other
commitments.

[1]:http://cross-lfs.org/view/clfs-embedded/

We're (2 or 3 of us) currently in the middle of a bunch of discussions
on #cross-lfs IRC and scattered around clfs-dev ml regarding moving from
uClibc to something else. I tried some builds with glibc and realized
that's not really a decent choice, even today. I got pointed to musl by
rofl0r on Github (sorry, don't know their real name). We're now working
through coming up to speed on musl and learning how to build it such that
we can consider it for use on MIPS, x86 and ARM for the book [2].

[2]:http://lists.cross-lfs.org/pipermail/clfs-dev-cross-lfs.org/2013-July/001517.html

If the CLFS embedded book is a good starting point, please feel free to
send patches, I'm happy to take them. Although based on reading through
the archive it looks like the real goal here is bigger than CLFS
embedded's goals: talking about jtags, emulated bootstraps, native
compilers, etc.

If CLFS embedded isn't the right place to start, maybe deprecating the
CLFS embedded book and moving to the proposed wiki would be something to
consider? Especially if the developer time / experience is higher and
if the final product would cover more of the "why" and not just the
steps. I'd be happy to contribute to such a resource where I can.

Thanks,
Andrew
Rob Landley
2013-07-22 04:40:11 UTC
Post by Strake
Post by Rob Landley
- creating a development environment (building binutils, gcc,
make...)
- requirements for self-hosting
- requirements for natively building Linux From Scratch
- cross vs native compiling
- bootstrapping to native compiling under emulation.
This. For me, at least, this is often the greatest hindrance.
It's a fairly hard part. My whole aboriginal linux project was an
investigation of what's actually involved here and how to do it. Now
that the investigation's complete (or at least reached a reasonable "it
works now, and I can't think of an obviously better way to do it with
the tools at hand" stopping point), I suspect there's a better way of
explaining it than just "go read this giant pile of shell scripts that
I got to work".

So I should write up what's involved, and how I determined the
requirements...
Post by Strake
Post by Rob Landley
- efficient (elegant) programming
- Why C and scripting languages, why NOT C++ and autoconf
- tradeoffs
- code reuse
- transaction granularity
- taking advantage of SMP without going crazy
I would be glad to help here.
What did you have in mind?
Post by Strake
http://harmful.cat-v.org/software/
I read the original "cat -v considered harmful" which is why I did
"catv" in busybox many years ago, but that was a paper by one of the
original bell labs guys. This guy is just collecting random papers.

While I admire the attitude, I've never found that site particularly
useful. There's no pragmatism at all in his approach, he doesn't
recommend things you can actually _use_, just platitudes. He recommends
tcc instead of gcc, which doesn't build even 1% as many real world
software packages. (I joined the tcc mailing list right after tccboot
hit slashdot circa 2004, and I spent 3 years maintaining a tcc fork
after Fabrice moved on to QEMU. I know exactly why it's NOT a real
world replacement for gcc right now, what would be required to get
minimal functionality out of it, and why the current development team
will never do so.) Similarly he recommends uclibc and dietlibc instead
of glibc with no discussion of the tradeoffs... musl exists because
they're not good enough.

What I'm hoping out of the new embedded newbies stuff is things people
can actually do/use. Even the theory should lead to practical advice,
immediately applicable. (It just explains _why_ you want to do it that
way, and what happens if you don't.)

Rob
Strake
2013-07-23 00:12:39 UTC
Post by Rob Landley
What did you have in mind?
Post by Rob Landley
- efficient (elegant) programming
- Why C and scripting languages, why NOT C++ and autoconf
This. Too, why not glib, and other such garbage.
Post by Rob Landley
Post by Rob Landley
- tradeoffs
- code reuse
including, particularly, polymorphism and composability.
Post by Rob Landley
Post by Rob Landley
- transaction granularity
- taking advantage of SMP without going crazy
I leave these to someone less ignorant on the matter.

I would note too that computers are meant to save our time, including,
perhaps above all, the hackers who write code for them. This often
seems ignored or forgotten.
Rob Landley
2013-07-27 00:58:28 UTC
Post by Strake
Post by Rob Landley
What did you have in mind?
Post by Rob Landley
- efficient (elegant) programming
- Why C and scripting languages, why NOT C++ and autoconf
This. Too, why not glib, and other such garbage.
Never having used glib, I'm not qualified to warn people away from it.
I _have_ used C++ fairly extensively and already wrote up a banishment
ritual.
Post by Strake
Post by Rob Landley
Post by Rob Landley
- tradeoffs
- code reuse
including, particularly, polymorphism and composability.
I don't know what you mean by this. (I remember the buzzwords, but...)

By "code reuse" I meant it's very easy to suck in a lot of code you
never have a first user for by grabbing a library that does 1000 things
of which you need 3. Environmental dependencies are a form of code
complexity, but it's invisible because you seem virtuous by requiring
the whole gnome library suite for what turns out to be a network daemon.

Alternately, "infrastructure in search of a user" is as bad as
premature optimization: hold off writing code until you actually need
it.

Otherwise the unused code will sit there and bit-rot, never tested or
regression tested by anything, making it harder to change your design
in response to real world needs both by bulking out the code you need
to rewrite to accommodate design changes, and by chaffing the system
about what your real world needs actually _are_ since half the code is
serving imaginary needs. Plus you have code you're afraid to touch
because you can't test whether or not your changes break users you
can't find; showing nothing _does_ use it after the fact is proving a
negative, notoriously difficult.
Post by Strake
Post by Rob Landley
Post by Rob Landley
- transaction granularity
- taking advantage of SMP without going crazy
I leave these to someone less ignorant on the matter.
I would note too that computers are meant to save our time, including,
perhaps above all, the hackers who write code for them. This often
seems ignored or forgotten.
My aboriginal linux 260 slide presentation described why native
compiling under emulation is better than cross compiling. One reason
was throwing processor time at the problem instead of throwing
engineering time at the problem. Moore's Law helps with one of these.

Rob
Strake
2013-07-27 02:01:48 UTC
Post by Rob Landley
Post by Strake
Post by Rob Landley
What did you have in mind?
Post by Rob Landley
- efficient (elegant) programming
- Why C and scripting languages, why NOT C++ and autoconf
This. Too, why not glib, and other such garbage.
Never having used glib, I'm not qualified to warn people away from it.
I used it little, just to hack surf and jumanji, but I found
insanely_long_function_names, poor docs of what allocates or frees
what, wanton type synonyms, and generally a tangled mess.

I gave up.

This may not be glib alone, but glib surely seems guilty too.
Post by Rob Landley
I _have_ used C++ fairly extensively
this -> beSorry ();
Post by Rob Landley
Post by Strake
including, particularly, polymorphism and composability.
I don't know what you mean by this. (I remember the buzzwords, but...)
Polymorphism: not need to rewrite essentially the same code for each type.
Haskell wins at this, but is not quite a systems language (^_~)
C at least has void pointers, which work in some cases.

Composability: write functions to do one thing well, and have them
call other functions, perhaps passed as arguments, to do other things,
or better yet, not do them at all. For example: "Damn, I wish I could
define my own comparator/reader/whatever function here!"
Post by Rob Landley
By "code reuse" I meant it's very easy to suck in a lot of code you
never have a first user for by grabbing a library that does 1000 things
of which you need 3. Environmental dependencies are a form of code
complexity, but it's invisible because you seem virtuous by requiring
the whole gnome library suite for what turns out to be a network daemon.
Yes, so that particular library loses, but factorization wins.
Post by Rob Landley
Alternately, "infrastructure in search of a user" is as bad as
premature optimization: hold off writing code until you actually need
it.
Worse: it may never save any time at all!
Post by Rob Landley
My aboriginal linux 260 slide presentation described why native
compiling under emulation is better than cross compiling. One reason
was throwing processor time at the problem instead of throwing
engineering time at the problem. Moore's Law helps with one of these.
Ah yes, "engineer competence doubles every 18 months" (^_^)

Cheers,
Strake
Rich Felker
2013-07-27 02:50:25 UTC
Post by Strake
Post by Rob Landley
Post by Strake
Post by Rob Landley
What did you have in mind?
Post by Rob Landley
- efficient (elegant) programming
- Why C and scripting languages, why NOT C++ and autoconf
This. Too, why not glib, and other such garbage.
Never having used glib, I'm not qualified to warn people away from it.
glib is basically the C++ STL written in C, but lacking exceptions so
that there's no way to handle errors.
Post by Strake
I used it little, just to hack surf and jumanji, but I found
insanely_long_function_names, poor docs of what allocates or frees
what, wanton type synonyms, and generally a tangled mess.
I gave up.
While debugging the heap-check crash that turned out to be memalign, I
dug into the glib and libxml2 code a bit. Just casually inspecting
less than 500 lines, I found cases of UB that don't break now but will
break down the road with fancier compilers, lack of synchronization
where needed, and various other small to medium bugs, not to mention
75%-redundant code in multiple code paths (lack of any proper
factoring). Offhand I would guess the whole GNOME family of code has
something like 4-10 bugs per 100 LoC....
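
(A hedged illustration of that "works now, breaks later" flavor of UB,
made up for this mail rather than lifted from glib: signed overflow is
undefined in C, so a newer compiler may assume it can't happen and
delete the test entirely.)

  /* Looks like a portable overflow check, but x + 1 invokes UB when
     x == INT_MAX, so an optimizer may legally fold this to 0. */
  int will_overflow(int x)
  {
      return x + 1 < x;
  }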

Rich
Rob Landley
2013-07-29 20:01:23 UTC
Post by Rich Felker
Post by Rob Landley
Post by Strake
Post by Rob Landley
What did you have in mind?
Post by Rob Landley
- efficient (elegant) programming
- Why C and scripting languages, why NOT C++ and autoconf
Post by Strake
This. Too, why not glib, and other such garbage.
Never having used glib, I'm not qualified to warn people away from it.
glib is basically the C++ STL written in C, but lacking exceptions so
that there's no way to handle errors.
If we have a "libraries you might want to look at" page, we probably
want to have a "libraries we'd like to warn you away from" page. And
that's a marvelous summary of glib on such a page.
Post by Rich Felker
While debugging the heap-check crash that turned out to be memalign, I
dug into the glib and libxml2 code a bit. Just casually inspecting
less than 500 lines, I found cases of UB that don't break now but will
break down the road with fancier compilers, lack of synchronization
where needed, and various other small to medium bugs, not to mention
75%-redundant code in multiple code paths (lack of any proper
factoring). Offhand I would guess the whole GNOME family of code has
something like 4-10 bugs per 100 LoC....
Gnome is GNU:

http://en.wikipedia.org/wiki/GNU_Project#GNOME

So of course the code's crap. GNU is a political project, not an
engineering project. The technology is never the focus of the effort,
and always subservient to other interests.

Rob
Rob Landley
2013-07-29 19:54:51 UTC
Post by Strake
Post by Rob Landley
Post by Strake
Post by Rob Landley
What did you have in mind?
Post by Rob Landley
- efficient (elegant) programming
- Why C and scripting languages, why NOT C++ and autoconf
This. Too, why not glib, and other such garbage.
Never having used glib, I'm not qualified to warn people away from it.
I used it little, just to hack surf and jumanji, but I found
insanely_long_function_names, poor docs of what allocates or frees
what, wanton type synonyms, and generally a tangled mess.
I gave up.
This may not be glib alone, but glib surely seems guilty too.
Post by Rob Landley
I _have_ used C++ fairly extensively
this -> beSorry ();
Post by Rob Landley
Post by Strake
including, particularly, polymorphism and composability.
I don't know what you mean by this. (I remember the buzzwords, but...)
Polymorphism: not need to rewrite essentially the same code for each type.
Haskell wins at this, but is not quite a systems language (^_~)
C at least has void pointers, which work in some cases.
C++ templates don't make you write the same code for each type, instead
they generate code for each type bloating the executable tremendously
and making you reverse engineer their code generation when a bug
happens in the middle of it or you have to trace back through it to
understand what the code actually did.

Java has a similar failure where they use templates to punch holes in
their type system and the result is your tools generate buckets of
template code so one year old java projects with three developers with
more than a million lines of code are not actually that unusual.

The definitive (long) Java takedown:
http://steve-yegge.blogspot.com/2007/12/codes-worst-enemy.html
Post by Strake
Composability: write functions to do one thing well, and have them
call other functions, perhaps passed as arguments, to do other things,
or better yet, not do them at all. For example: "Damn, I wish I could
define my own comparator/reader/whatever function here!"
Um, unix has this at the command line level. C has had this from day 1
(it's why it has function pointers). Nobody ever needed a buzzword for
it, because it's not special.
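
(Concretely, and hedged as a toy example rather than code from any real
package: qsort() has taken a caller-supplied comparator through a plain
function pointer since day 1.)

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  /* Caller-defined policy: order strings by length, not lexically. */
  static int by_length(const void *a, const void *b)
  {
      size_t la = strlen(*(const char * const *)a);
      size_t lb = strlen(*(const char * const *)b);
      return (la > lb) - (la < lb);
  }

  int main(void)
  {
      const char *w[] = { "toybox", "musl", "linux" };
      qsort(w, sizeof w / sizeof *w, sizeof *w, by_length);
      for (int i = 0; i < 3; i++) puts(w[i]);  /* musl linux toybox */
      return 0;
  }
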
Post by Strake
Post by Rob Landley
By "code reuse" I meant it's very easy to suck in a lot of code you
never have a first user for by grabbing a library that does 1000
things
Post by Rob Landley
of which you need 3. Environmental dependencies are a form of code
complexity, but it's invisible because you seem virtuous by
requiring
Post by Rob Landley
the whole gnome library suite for what turns out to be a network
daemon.
Yes, so that particular library loses, but factorization wins.
"factorization" is a word now?
Post by Strake
Post by Rob Landley
Alternately, "infrastructure in search of a user" is as bad as
premature optimization: hold off writing code until you actually need
it.
Worse: it may never save any time at all!
It generally costs time.
Post by Strake
Post by Rob Landley
My aboriginal linux 260 slide presentation described why native
compiling under emulation is better than cross compiling. One reason
was throwing processor time at the problem instead of throwing
engineering time at the problem. Moore's Law helps with one of these.
Ah yes, "engineer competence doubles every 18 months" (^_^)
Sometimes the tools get better. But often they go down blind alleys,
and then refuse to back out of their cul-de-sac because they made
_progress_ for a year or so before encountering the dead end, and they
refuse to abandon all that work they've done on the properties of
caloric fluid migrating through the ether.

Rob
Strake
2013-07-30 01:35:26 UTC
Post by Rob Landley
Post by Strake
Polymorphism: not need to rewrite essentially the same code for each type.
Haskell wins at this, but is not quite a systems language (^_~)
C at least has void pointers, which work in some cases.
C++ templates don't make you write the same code for each type, instead
they generate code for each type bloating the executable tremendously
and making you reverse engineer their code generation when a bug
happens in the middle of it or you have to trace back through it to
understand what the code actually did.
Java has a similar failure where they use templates to punch holes in
their type system and the result is your tools generate buckets of
template code so one year old java projects with three developers with
more than a million lines of code are not actually that unusual.
http://steve-yegge.blogspot.com/2007/12/codes-worst-enemy.html
I doubt it not, but I said nil of C++ and Java in that segment.
Post by Rob Landley
Post by Strake
Composability: write functions to do one thing well, and have them
call other functions, perhaps passed as arguments, to do other things,
or better yet, not do them at all. For example: "Damn, I wish I could
define my own comparator/reader/whatever function here!"
C has had this from day 1 (it's why it has function pointers).
Oh, it's surely possible, but at times forgotten.
Too, (passing pointers, clearing errno, ...) to kludge over C's lack
of (tuples|multiple return values) can break this.
Post by Rob Landley
Nobody ever needed a buzzword for it, because it's not special.
It's not special, but it may be noteworthy, particularly when teaching
or at least telling good practice, as we aim to here.
Post by Rob Landley
unix has this at the command line level.
Yes, but some utilities fail at it. Sort, for example, has insanely
many flags, and nevertheless fails to cover common usage cases. If
rather sort worked thus:

$ sort comparator argu ...

where comparator is some program, and argu ... its arguments, it would
not need those flags.
Post by Rob Landley
Post by Strake
Yes, so that particular library loses, but factorization wins.
"factorization" is a word now?
Yep, modern English has Latin in it.
Post by Rob Landley
Post by Strake
Ah yes, "engineer competence doubles every 18 months" (^_^)
Sometimes the tools get better. But often they go down blind alleys,
and then refuse to back out of their cul-de-sac because they made
_progress_ for a year or so before encountering the dead end, and they
refuse to abandon all that work they've done on the properties of
caloric fluid migrating through the ether.
And sometimes the engineers ignore the tools...

Cheers,
Strake
Rob Landley
2013-08-01 06:20:57 UTC
Post by Strake
Post by Rob Landley
Post by Strake
Polymorphism: not need to rewrite essentially the same code for each type.
Haskell wins at this, but is not quite a systems language (^_~)
C at least has void pointers, which work in some cases.
C++ templates don't make you write the same code for each type, instead
they generate code for each type bloating the executable tremendously
and making you reverse engineer their code generation when a bug
happens in the middle of it or you have to trace back through it to
understand what the code actually did.
Java has a similar failure where they use templates to punch holes in
their type system and the result is your tools generate buckets of
template code so one year old java projects with three developers with
more than a million lines of code are not actually that unusual.
http://steve-yegge.blogspot.com/2007/12/codes-worst-enemy.html
I doubt it not, but I said nil of C++ and Java in that segment.
You were using their buzzwords.
Post by Strake
Post by Rob Landley
Post by Strake
Composability: write functions to do one thing well, and have them
call other functions, perhaps passed as arguments, to do other things,
or better yet, not do them at all. For example: "Damn, I wish I could
define my own comparator/reader/whatever function here!"
C has had this from day 1 (it's why it has function pointers).
Oh, it's surely possible, but at times forgotten.
Too, (passing pointers, clearing errno, ...) to kludge over C's lack
of (tuples|multiple return values) can break this.
C is based on static typing and static memory management, with the
static structures, arrays, and pointers as its main data composition
mechanisms.

Scripting languages like python/ruby/lua use dynamic typing and dynamic
memory management, which means they can include resizeable containers as
first class types. So instead of using structs, they use dictionaries
to associate names with values, and abstract away the underlying
implementation mechanism. (Is it a hash table or a tree? Who cares, it
just works.)

In between "entirely manual" and "fully automated" is the demilitarized
zone C++ inhabits where it's got automation that sort of works, but
only if you understand how it's implemented. To leverage the installed
base of C, they tried to build additional automation on top of
_pointers_ (not references), and it didn't work. Anywhere its
abstractions actually hide implementation details, you wind up with
elaborate magic rules that must be followed or things inexplicably
break and it's your fault.

So talking about kludging over C's lack of some feature is like saying
you're kludging over assembly language's lack of a feature. Intel did a
chip that implemented object support in the hardware, the Intel 432, and
it turned out to be unusably slow. Using the tool that's fast
while complaining about what makes it fast is silly.

If you don't want to do everything manually, there are plenty of
languages that allow you not to. They run at about 1/5 the speed of C,
and yes that includes the optimized subsets of javascript once you
strip out the hype and carefully crafted benchmarks. (The fact that 1/5
the speed of a gigahertz machine is 200 mhz and Quake ran fine on those
at low resolution can take people a while to notice; until their
battery dies.)
Post by Strake
Post by Rob Landley
Nobody ever needed a buzzword for it, because it's not special.
It's not special, but it may be noteworthy, particularly when teaching
or at least telling good practice, as we aim to here.
I'm trying to figure out if "I didn't learn C using object oriented
buzzwords" means "you don't need object oriented buzzwords to teach C".
I have the disadvantage of being old here.

That said, teaching C++ and thinking that means you've taught C was a
massive disservice to a generation of programmers. A mud pie is not a
beverage, even if you make it with bottled water. You can _add_ fail to
something.
Post by Strake
Post by Rob Landley
unix has this at the command line level.
Yes, but some utilities fail at it. Sort, for example, has insanely
many flags, and nevertheless fails to cover common usage cases. If
rather sort worked thus:
$ sort comparator argu ...
where comparator is some program, and argu ... its arguments, it would
not need those flags.
sort <(ls -f)
Post by Strake
Post by Rob Landley
Post by Strake
Yes, so that particular library loses, but factorization wins.
"factorization" is a word now?
Yep, modern English has Latin in it.
Ah, the bellum donum of the puella agricola. (My wife's had 4 years of
latin recently, and half of fadeaccompli.dreamwidth.org is translating
Catullus these days. Me, I just had the one in high school.)

English also had greek, celtic, gaelic, various scottish dialects, the
angles and the jutes (multiple times including the norsemen bouncing
off northern france), some deeply misguided germans, and that's before
the empire where the malaria drugs became a happy tradition of "gin and
tonic" bringing phrases and diseases from around the globe. (It's been
years since my english minor and the history of english class is a bit
fuzzy, I remember how to pronounce "gedaewhamlichan", which more or
less meant "daily" in old english, but not the correct way to spell it.)

That said, factorization is still pure buzzword in this context.
Post by Strake
Post by Rob Landley
Post by Strake
Ah yes, "engineer competence doubles every 18 months" (^_^)
Sometimes the tools get better. But often they go down blind alleys,
and then refuse to back out of their cul-de-sac because they made
_progress_ for a year or so before encountering the dead end, and they
refuse to abandon all that work they've done on the properties of
caloric fluid migrating through the ether.
And sometimes the engineers ignore the tools...
It's been a while since they improved on the hammer.

Rob
Strake
2013-08-03 16:52:57 UTC
Post by Rob Landley
Post by Strake
I doubt it not, but I said nil of C++ and Java in that segment.
You were using their buzzwords.
"Polymorphism" isn't theirs; they just use it.
Post by Rob Landley
Post by Strake
Oh, it's surely possible, but at times forgotten.
Too, (passing pointers, clearing errno, ...) to kludge over C's lack
of (tuples|multiple return values) can break this.
C is based on static typing and static memory management, with the
static structures, arrays, and pointers as its main data composition
mechanisms.
Yes. Notably, it lacks tuples, which are essentially anonymous structures.
Post by Rob Landley
Scripting languages like python/ruby/lua use dynamic typing and dynamic
memory managment, which means they can include resizeable containers as
first class types.
Well, yes, but dynamic memory allocation alone is enough; a language
can easily have static types and resizable containers, for example
Haskell.
Post by Rob Landley
So talking about kludging over C's lack of some feature is like saying
you're kludging over assembly language's lack of a feature.
I can return multiple values in asm. I ought to be able to do so in C.
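
(For what it's worth, the usual C idiom is returning a small struct by
value, which is how stdlib's own div() hands back quotient and
remainder together; a minimal sketch:)

  #include <stdio.h>

  struct qr { int quot, rem; };   /* poor man's tuple */

  /* Two results, no out-pointers, no errno abuse. */
  static struct qr divmod(int num, int den)
  {
      struct qr r = { num / den, num % den };
      return r;
  }

  int main(void)
  {
      struct qr r = divmod(17, 5);
      printf("%d r%d\n", r.quot, r.rem);   /* prints: 3 r2 */
      return 0;
  }
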
Post by Rob Landley
Using the tool that's fast while complaining about what makes it fast is silly.
Not complaining about what makes it fast, which to my knowledge is
imperative nature and explicit allocation.
Post by Rob Landley
I'm trying to figure out if "I didn't learn C using object oriented
buzzwords" means "you don't need object oriented buzzwords to teach C".
It does.
Post by Rob Landley
That said, teaching C++ and thinking that means you've taught C was a
massive disservice to a generation of programmers. A mud pie is not a
beverage, even if you make it with bottled water.
Heh. At Carleton University, in the first-year coding class for
engineers, they teach a little C + iostream and call it C++.
Post by Rob Landley
Post by Strake
$ sort comparator argu ...
where comparator is some program, and argu ... its arguments
sort <(ls -f)
I meant not what it sorts, but how it sorts it.
Post by Rob Landley
That said, factorization is still pure buzzword in this context.
Please define "buzzword" so I can shun them in future messages.
Post by Rob Landley
It's been a while since they improved on the hammer.
True, but not every fastener is a nail.

LM
2013-07-16 11:50:29 UTC
Post by Rob Landley
I'd like an explicit place to collect and preserve information about
this sort of thing, and a place we can send newbies to ask all the stupid
questions. The main page should teach somebody what embedded development
_is_ and how to do it, starting with how to build and install the simplest
Linux system that boots to a shell prompt (three packages: linux, musl, and
toybox).
Sounds like a great idea. Would be interested in reading articles on some
of the topics mentioned. Sites like suckless.org state what they consider
to be better and worse software choices. Would be nice to see some actual
statistics and rationale backing up what is considered better or worse
design. For instance, there are some negative mentions about the PCRE
library, but when I tried to track down the cons for using it, I only found
dated performance comparisons showing how poorly it worked if you don't use
the newer JIT implementation. What might be a positive for a system that's
optimized for a particular processor might be a negative if you're
interested in software that ports to multiple processors and vice versa.
Musl's useful not just for embedded systems but for older machines that
want to run efficient desktop environments. However, what works for a
desktop environment might not work well for an embedded system and so on.
Would like to see actual lists of pros and cons, fewer opinions, and let the
user decide if the software is really a bad fit with his/her needs or not.

Would also love to see a forum where one could discuss pros and cons of
various software and library choices, alternatives already out there and if
the user wants to rewrite some of these himself or herself for specific
needs, a place to discuss design issues.

There is an lfs-chat list. Think it would probably be a good idea to post
something about the idea of an LFS for embedded systems there and see if
any of the regular LFS users would be interested in getting involved. A
start might be to take the outline of possible topics Rob Landley supplied,
put it up on a wiki and see if people will volunteer to fill in some of the
blanks. Might also be useful to get together a list of what tasks need to
be done to get something started and ask for actual volunteers for each
task to help get things rolling. I do think a mailing list or forum would
be useful as well. That way, one can get discussions going and brainstorm
ideas about how best to program something or find information on a topic.
I tend to prefer mailing lists and forums to IRC. It's easier to read
through past information.

I've been talking with another developer about the possibility of building
(yet another) lightweight Linux distribution for older machines. I really
haven't been happy with what's currently out there. The average definition
of a lightweight Linux desktop for older machines is to use a lot of GTK+
programs (with a lightweight desktop like XFCE (not my definition of
lightweight), LXDE or razorQT) and even interpreted programs (as long as
they look like they're in console mode or like they might somehow be
lighter or more useful than their compiled equivalents). They typically
use the KISS principle which means (according to their take on it) I'm
stuck with the one graphics editor, the one music player, etc. that the
distribution creator happens to like. A Gimp or a Photoshop style program
has a lot of functionality. So does an Office Suite like LibreOffice. If
you're going to replace heavyweights with a program that does one thing
well, you're typically going to need more than one application with each
application designed to perform a specific piece of the functionality
well. You need more than one type of graphics program if you're doing
serious graphics editing, more than one type of music program if you're
doing serious music creation, etc. A lot of the topics such as how to put
together a system from scratch, what boot and init programs to go with,
which userspace utilities to use, which package manager to use, which
libraries are efficient would be of great interest for the project.
Another concern to me is which projects are open to accepting patches and
which aren't so open, making it prudent to look into more friendly
alternatives. I'd also been interested in discussing when it pays to
rewrite something from scratch and when it's better to reuse what's already
been done. I've been picking up ideas by looking at the code embedded
systems use. However, the end goal for this particular project is not an
embedded system but a GUI desktop that an average end user will be
comfortable working with. There's a lot of overlap, but definitely
different goals with different design tradeoffs.

Hope the idea to document and share many of the topics mentioned takes
off. Think it would make a very nice resource for certain types of
developers.

Sincerely,
Laura
http://www.distasis.com/cpp
Szabolcs Nagy
2013-07-16 13:56:47 UTC
Post by LM
design. For instance, there are some negative mentions about the PCRE
library, but when I tried to track down the cons for using it, I only found
dated performance comparisons showing how poorly it worked if you don't use
the newer JIT implementation. What might be a positive for a system that's
the pcre thing is a design decision that makes the worst
case asymptotic complexity exponential, the jit does not
help and benchmarks are irrelevant: they are about the
common case

russ cox gave a clear explanation:

http://swtch.com/~rsc/regexp/regexp1.html
http://swtch.com/~rsc/regexp/regexp2.html
http://swtch.com/~rsc/regexp/regexp3.html
http://swtch.com/~rsc/regexp/regexp4.html

jit can only speed up the execution of a compiled pattern
by some constant factor, it is also much more complex and
has greater startup cost than a classic nfa based engine

to fix the problem you need a different algorithm
(of course then many of the pcre features would be hard
to support)

if the regex input source is not in your control then
you should worry about worst-case performance, not the
average case one

if you check out the pcre benchmarks you can note that
it explicitly states that no "pathological" patterns were
used (ie ones which would make backtracking exponential)

http://sljit.sourceforge.net/regex_perf.html

and this is where the issue turns into an ideological debate:
should we train people how to avoid pathological cases or
should the algorithm guarantee good worst case performance on
any bounded input
(ppl seems to prefer instant gratification and common case
performance usually, but in a safety critical environment
you care about the worst-case more)
Rich Felker
2013-07-16 14:00:53 UTC
Post by LM
design. For instance, there are some negative mentions about the PCRE
library, but when I tried to track down the cons for using it, I only found
dated performance comparisons showing how poorly it worked if you don't use
the newer JIT implementation. What might be a positive for a system that's
The whole concept of regular expressions is that they're regular,
meaning they're matchable in O(n) time with O(1) space. PCRE (the
implementation) uses backtracking for everything, giving it
exponentially-bad performance (JIT cannot fix this), and PCRE (the
language) has a lot of features that are fundamentally not regular and
thus can't be implemented efficiently. Also, the behavior of some of
the features (e.g. greedy vs non-greedy matching) were not designed
intentionally but just arose out of the backtracking implementation,
and thus don't make a lot of sense unless you think from the
standpoint of such an implementation.
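
(To make "regular" concrete, a hedged sketch against the POSIX regex
API, which is what musl ships; the pattern and input are made-up
examples:)

  #include <regex.h>
  #include <stdio.h>

  int main(void)
  {
      regex_t re;
      /* A genuinely regular pattern: compile it once... */
      if (regcomp(&re, "^[a-z]+[0-9]*$", REG_EXTENDED | REG_NOSUB))
          return 1;
      /* ...then match in a single pass; regexec() returns 0 on match. */
      puts(regexec(&re, "musl0", 0, NULL, 0) ? "no match" : "match");
      regfree(&re);
      return 0;
  }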

Aside from performance, PCRE is harmful to CS education because it
undermines the whole definition of "regular" when students learn a
sense of "regular expression" that's not actually regular. Of course
this can be worked around if the instructor teaches this issue well
when teaching PCRE, but I think normally PCRE is just taught as a tool
without any theoretical background.

Rich
Strake
2013-07-16 17:49:55 UTC
Post by LM
Sites like suckless.org state what they consider to be better and worse
software choices. Would be nice to see some actual statistics and
rationale backing up what is considered better or worse design.
No statistics, but they surely have rationale: http://suckless.org/sucks
Post by LM
Musl's useful not just for embedded systems but for older machines that
want to run efficient desktop environments. However, what works for a
desktop environment might not work well for an embedded system and so on.
True, but small code wins everywhere, for it fits more easily in cache.
Rob Landley
2013-07-22 06:00:04 UTC
Post by LM
Post by Rob Landley
I'd like an explicit place to collect and preserve information about
this sort of thing, and a place we can send newbies to ask all the
stupid questions. The main page should teach somebody what embedded
development _is_ and how to do it, starting with how to build and
install the simplest Linux system that boots to a shell prompt (three
packages: linux, musl, and toybox).
Sounds like a great idea. Would be interested in reading articles on some
of the topics mentioned. Sites like suckless.org state what they consider
are better and worse software choices.
I lurked on the #suckless IRC channel on OTP or whatever it was for a week.
It seems to be support for some window manager (dwm?). Nothing else was
ever discussed...
Post by LM
Would be nice to see some actual
statistics and rationale backing up what is considered better or worse
design. For instance, there are some negative mentions about the PCRE
library, but when I tried to track down the cons for using it, I only found
dated performance comparisons showing how poorly it worked if you don't use
the newer JIT implementation.
The great thing about Linux From Scratch is it's practical. It's a
procedure you can actually reproduce for yourself, and when you try it
you get a running system that you built and are in a position to
modify. It mostly explains why you're doing what you're doing, and
provides some alternatives along the way.

But Linux From Scratch 3.x was a better learning experience than the
current one, because these days it's much bigger and much more
complicated to get a running system, and you don't really learn much
more. Plus the "hints" files about things like BSD init scripts are
sort of deprecated now. And it doesn't really present stuff like tcl as
optional, even though it's only ever used to run test suites...

Beyond Linux From Scratch is about adding stuff to the base linux
system, but there's nothing in there about _removing_ stuff. Or
swapping out base packages for alternatives. (Again, the "hints" used
to go into this, but they seem to have tailed off over the past few
years...)

Oh, we should totally be linking to
http://www.muppetlabs.com/~breadbox/software/tiny/teensy.html and
possibly trying to reproduce a current version under 3.x kernels.

A lot of stuff, anybody can take and just do the legwork. For example,
we really need a current version of
https://www.kernel.org/doc/mirror/lki-single.html and somebody could
just _take_ that and compare it with the current kernel and do an
updated version based on what they learn by reading current kernel
source using the old 2.4 version as a guide...
Post by LM
What might be a positive for a system that's
optimized for a particular processor might be a negative if you're
interested in software that ports to multiple processors and vice versa.
I've yet to find a per-processor optimization that buys you one
iteration of Moore's Law.

And I _have_ seen years of seesawing back and forth over "here's a
lookup table of 256 angles your ship can be at where we've done all the
trigonometry... oh the new processor has a floating point coprocessor
and tiny l1 cache so it's actually faster to calculate it than thrash
the cache with our lookup table, oh now the NEW processor has a big L2
cache so the lookup table is faster again, oh now they've added 3D
hardware so all this mess is slower than having the 3D hardware do
it..." I've seen optimizations where the pendulum went back and forth a
half dozen times on the same issue with RISC vs CISC and whether loop
unrolling is a win and...

And what it keeps coming back to is "simple code you understand beats
clever code you don't". Do a simple implementation, let the compiler
optimize it for you, the bit you're doing is not the hard part (if it
is, you're doing it wrong, which means back up and rethink), so just
get it done and stay out of the way while Big Important Programmers do
things they find Terribly Hard To Do because they're straining at gnats
and all that...

(Premature optimization is the root of all evil, when in doubt use
brute force, etc.)
Post by LM
Musl's useful not just for embedded systems but for older machines that
want to run efficient desktop environments. However, what works for a
desktop environment might not work well for an embedded system and so on.
Knoppix was a fine desktop. I used it as such for over two years. I
installed it to the hard drive because running from CD was slow and
tying up the CD drive was inconvenient, but operating under a space
constraint on the image size meant they had to figure out what they
really NEEDED, and it made a better system.
Post by LM
Would like to see actual lists of pros and cons, less opinions and let the
user decide if the software is really a bad fit with his/her needs or not.
I wouldn't presume to advise people without knowing what they wanted to
use a system _for_. For an awful lot of people Red Hat Enterprise is
the correct choice, albeit for financial rather than technical reasons.
(You know why Red Hat drove Sun Microsystems out of business, right?)
Post by LM
Would also love to see a forum where one could discuss pros and cons of
various software and library choices, alternatives already out there and if
the user wants to rewrite some of these himself or herself for specific
needs, a place to discuss design issues.
I'm not sure you're asking well-defined questions here.

Ok, simple mindset: complexity is a cost. Treat complexity as something
you spend to get functionality, and you're 3/4 of the way to making
good decisions.

There's some fuzziness in measuring complexity, but lines of source
code maps pretty well to "amount of human thought required to
understand what this thing is doing". If you have a project that gets
the job done with 100,000 lines of code, and another one that requires
2 million lines of code, to _me_ it's pretty darn clear which of the
two is superior.

You can then say "but the big one has features X, Y, and Z I need, and
we benchmarked the performance in our deployment environment and the
big one performs 12.7% faster", and then you go "do you really need
those features? How much work would adding them to the small one be,
and would the upstream project take it or just consider it bloat, and
if you were to maintain your own patchset to add that feature to the
small one would it change your answer about whether or not you actually
need it?"

And of course there's complexity you directly engage with and
complexity you indirectly engage with; your _local_ complexity may be
"this giant black box works for everybody else so just using it is very
easy for us as long as it never breaks, and if it does we have a vendor
for that". And of course if you _are_ the vendor, deploying dropbear
instead of openssh can have a negative PR effect because openssh is The
Standard but if that's such a big deal why aren't you using Windows...

And really all this infrastructure is generally stuff that should just
work, and should be an existing solved problem so you can focus on your
app...

See how it spirals through a gazillion topics? As I said: not sure what
questions you're really asking.
Post by LM
There is an lfs-chat list. Think it would probably be a good idea to post
something about the idea of an LFS for embedded systems there and see if
any of the regular LFS users would be interested in getting involved. A
start might be to take the outline of possible topics Rob Landley supplied,
put it up on a wiki and see if people will volunteer to fill in some of the
blanks. Might also be useful to get together a list of what tasks need to
be done to get something started and ask for actual volunteers for each
task to help get things rolling. I do think a mailing list or forum would
be useful as well. That way, one can get discussions going and brainstorm
ideas about how best to program something or find information on a topic.
I tend to prefer mailing lists and forums to IRC. It's easier to read
through past information.
Good concrete questions to answer are a good start. Not "maybe people
would want to know X" but "I want to know X."
Post by LM
I've been talking with another developer about the possibility of building
(yet another) lightweight Linux distribution for older machines. I really
haven't been happy with what's currently out there.
Aboriginal Linux is the lightest-weight development environment I know
how to make. (And switching to musl should make it lighter.)
Post by LM
The average definition
of a lightweight Linux desktop for older machines is to use a lot of GTK+
programs (with a lightweight desktop like XFCE (not my definition of
lightweight), LXDE or razorQT) and even interpreted programs (as long as
they look like they're in console mode or like they might somehow be
lighter or more useful than their compiled equivalents).
X11 is a windowing system. It draws graphics on a screen; lines, fonts,
boxes, stamping images from bitmaps and such, and the bitblts and
double buffering used for dragging and scrolling windows and such.

Then you have a window manager that draws borders and title bars and
menus, and gives them behavior so when you grab the corner and drag it
the window resizes, or grab the title bar the window moves, or handles
the z-order stuff so windows draw in front of other windows (which
pragmatically means you hide or clip window areas and only draw parts
of 'em).

Then you have a toolkit, which is a shared library of graphics
primitives and associated behavior when they get mouseovers or clicks
or keys on the keyboard are pressed while it has focus. (Window manager
defines what "focus" is and sending keypresses and clicks to the right
thing.) Your toolkit is where you find code to implement a button or a
scrollbar or a pulldown menu.

Then you have a desktop program, which is the thing that runs to _use_
X11, a window manager, and a toolkit to provide behavior for an
otherwise empty screen. It provides the bar along the top that shows
you your list of open windows, and provides a menu of stuff you can
launch, and a dock for tiny icons associated with programs that know
about that type of desktop and can do the right transaction with it to
register themselves.

I'm running the xubuntu linux distro. It's using xfce as the desktop
program, which uses the gtk toolkit, and xfwm4 is the window manager.
All running on top of x.org which is the windowing system.

It's possible _not_ to use all these layers of stuff, but generally
when a program doesn't it's because it's reinventing it. You don't have
to use gtk, you can have your program draw its own buttons and respond
to mouse clicks in that area of its window manually: and that means no
two programs look or behave remotely the same.
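
(To make the layering concrete, a hedged minimal sketch that talks to
the windowing system directly, with no toolkit at all; build with
something like "cc demo.c -lX11".)

  #include <X11/Xlib.h>

  int main(void)
  {
      Display *d = XOpenDisplay(NULL);   /* connect to the X server */
      if (!d) return 1;
      int s = DefaultScreen(d);
      /* One bare window: X11 alone gives you no buttons, no title bar
         logic, no scrollbars; that's the toolkit's job. */
      Window w = XCreateSimpleWindow(d, RootWindow(d, s), 0, 0, 200, 100,
                                     1, BlackPixel(d, s), WhitePixel(d, s));
      XSelectInput(d, w, KeyPressMask);
      XMapWindow(d, w);
      for (;;) {                         /* event loop: quit on any key */
          XEvent e;
          XNextEvent(d, &e);
          if (e.type == KeyPress) break;
      }
      XCloseDisplay(d);
      return 0;
  }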

Once again, defining "simple" requires understanding what it is you're
trying to _do_. Simple is an adjective, not a noun.
Post by LM
They typically
use the KISS principle which means (according to their take on it) I'm
stuck with the one graphics editor, the one music player, etc. that the
distribution creator happens to like. A Gimp or a Photoshop style program
has a lot of functionality. So does an Office Suite like LibreOffice.
Star Office (from a german company that Sun bought; renamed Open Office)
was the first non-microsoft program that actually had good support for
reading and writing Word files, due to YEARS of careful reverse
engineering effort that started back on OS/2 before being ported to
Linux.

The opening of Open Office had the same failure mode Mozilla did (long
story, I did a talk about it once if you're bored) and the resulting
code bloat is epic. But getting the "reads, edits, and writes word
documents well" functionality out of anything _else_ turns out to be
really hard.
Post by LM
If you're going to replace heavyweights with a program that does one thing
well, you're typically going to need more than one application with each
application designed to perform a specific piece of the functionality
well. You need more than one type of graphics program if you're doing
serious graphics editing, more than one type of music program if you're
doing serious music creation, etc. A lot of the topics such as how to put
together a system from scratch, what boot and init programs to go with,
which userspace utilities to use, which package manager to use, which
libraries are efficient would be of great interest for the project.
Linux From Scratch and Beyond Linux From Scratch already cover this.
And Gentoo set about trying to automate it. Both have serious failings,
but they're an existing starting point to acquire this knowledge.

What neither does is say how to set up a simple base system that isn't
infested with gnu crap, and then extend it towards providing the
prerequisite packages that things like OpenOffice require. Learning how to swap
busybox for coreutils and make that work to run postgresql on the
resulting system...
Post by LM
Another concern to me is which projects are open to accepting patches and
which aren't so open, making it prudent to look into more friendly
alternatives. I'd also been interested in discussing when it pays to
rewrite something from scratch and when it's better to reuse what's already
been done. I've been picking up ideas by looking at the code embedded
systems use. However, the end goal for this particular project is not an
embedded system but a GUI desktop that an average end user will be
comfortable working with. There's a lot of overlap, but definitely
different goals with different design tradeoffs.
Embedded and non-embedded systems are distinguishable by the
"complexity is a cost" mindset. Desktop systems seem to think they have
unlimited storage, memory, bandwidth, processing power, and so on due
to Moore's Law, and that they also have unlimited warm bodies capable
of maintaining the result due to open source and the internet.

Embedded systems are designed with the idea that fitting those 15
million lines of code into 30 cents worth of flash memory could be
painful. That running it on a processor running off a watch battery may
be slow. That one junior engineer allotted 3 days to port it and every
prerequisite package it requires to a brand new processor implemented
in an FPGA with a beta compiler fork based on gcc 3.4 might have a
rough time of it. That there's some exploit lurking in those 15 million
lines of code, and when you put it on a system that no human being will
log into for two years, that doesn't get upgraded in all that time, but
has a broadband connection to the internet, bad things will happen.

Think about Mozilla vs webkit. Mozilla is based around the idea that
writing a good browser is hard, and there should be only one, and it
must do everything for everybody and be perfect.

Webkit is based on the idea that a browser is disposable and gets
reimplemented from scratch every few years. Webkit started life as the
KHTML engine in Konqueror, the browser built into KDE which went from
zero to usable in about a year. Then Apple grabbed it and forked it and
did Safari out of it. Then Google grabbed it and forked it and did
Chrome out of it. I expect in a couple years people will throw chrome
out and do a new one.

Google designed Chrome to work with Android, on phones and tablets. You
can kill individual tab processes, because they acknowledge they're
going to break and it won't be perfect so that's a thing you may want
to _do_. It's got a lot more of the embedded mindset than Mozilla, as
Scott McCloud explained back at the launch:

http://www.scottmccloud.com/googlechrome/

Rob