>> I'd like an explicit place to collect and preserve information about
>> this sort of thing, and a place we can send newbies to ask all the
>> stupid questions. The main page should teach somebody what embedded
>> development _is_ and how to do it, starting with how to build and
>> install the simplest Linux system that boots to a shell prompt (three
>> packages: linux, musl, and
> Sounds like a great idea. Would be interested in reading articles on
> some of the topics mentioned. Sites like suckless.org state what they
> consider are better and worse software choices.
I lurked on the #suckless IRC channel on OTP or whatever it was for a
week. It seemed to be support for some window manager (dwm?). Nothing
else was ever discussed...
> Would be nice to see some actual statistics and rationale backing up
> what is considered better or worse design. For instance, there are
> some negative mentions about the PCRE library, but when I tried to
> track down the cons for using it, I only found dated performance
> comparisons showing how poorly it worked if you don't use the newer
> JIT implementation.
The great thing about Linux From Scratch is it's practical. It's a
procedure you can actually reproduce for yourself, and when you try it
you get a running system that you built and are in a position to
modify. It mostly explains why you're doing what you're doing, and
provides some alternatives along the way.
But Linux From Scratch 3.x was a better learning experience than the
current one, because these days it's much bigger and much more
complicated to get a running system, and you don't really learn much
more. Plus the "hints" files about things like BSD init scripts are
sort of deprecated now. And it doesn't really present stuff like tcl as
optional, even though it's only ever used to run test suites...
Beyond Linux From Scratch is about adding stuff to the base linux
system, but there's nothing in there about _removing_ stuff. Or
swapping out base packages for alternatives. (Again, the "hints" used
to go into this, but they seem to have tailed off over the past few
years...)
Oh, we should totally be linking to
http://www.muppetlabs.com/~breadbox/software/tiny/teensy.html and
possibly trying to reproduce a current version under 3.x kernels.
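(For the impatient: the modern starting point is a C file with no libc
at all. A rough sketch, assuming x86-64 Linux and gcc; the syscall
number is the only architecture-specific bit:

    /* tiny.c - no libc, just a _start that makes one syscall.
     * Build: gcc -static -nostdlib -s tiny.c -o tiny
     * 231 is __NR_exit_group on x86-64. */
    void _start(void)
    {
        __asm__ volatile (
            "mov $231, %rax\n"   /* syscall number */
            "mov $42, %rdi\n"    /* exit code */
            "syscall\n"
        );
    }

From there the article's game is hand-crafting the ELF headers
themselves, which is where the real byte-shaving happens.)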
A lot of this stuff anybody can just take and do the legwork on. For example,
we really need a current version of
https://www.kernel.org/doc/mirror/lki-single.html and somebody could
just _take_ that and compare it with the current kernel and do an
updated version based on what they learn by reading current kernel
source using the old 2.4 version as a guide...
> What might be a positive for a system that's optimized for a
> particular processor might be a negative if you're interested in
> software that ports to multiple processors, and vice versa.
I've yet to find a per-processor optimization that buys you one
iteration of Moore's Law.
And I _have_ seen years of seesawing back and forth over "here's a
lookup table of 256 angles your ship can be at where we've done all the
trigonometry... oh, the new processor has a floating point coprocessor
and a tiny L1 cache so it's actually faster to calculate it than thrash
the cache with our lookup table, oh, now the NEW processor has a big L2
cache so the lookup table is faster again, oh, now they've added 3D
hardware so all this mess is slower than having the 3D hardware do
it..." I've seen the pendulum swing back and forth a half dozen times
on the same issue with RISC vs CISC, with whether loop unrolling is a
win, and...
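To make that concrete, here's the 256-angle trick as actual code (a
sketch, names invented; whether the table or the computation wins
depends entirely on which year's processor you benchmark it on):

    #include <math.h>

    #define ANGLES 256  /* one byte of ship angle, old-game style */
    static float sin_table[ANGLES];

    /* Precompute all 256 sines once at startup: 1k of table. */
    void init_sin_table(void)
    {
        for (int i = 0; i < ANGLES; i++)
            sin_table[i] = sinf(i * (2.0f * 3.14159265f / ANGLES));
    }

    /* Table version: one load, but 1k competing for a tiny L1 cache. */
    float sin_lookup(unsigned char angle)
    {
        return sin_table[angle];
    }

    /* Computed version: no memory traffic, burns FPU cycles instead. */
    float sin_compute(unsigned char angle)
    {
        return sinf(angle * (2.0f * 3.14159265f / ANGLES));
    }

Benchmark both on your actual target and the answer flips depending on
the hardware. That's the point.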
And what it keeps coming back to is "simple code you understand beats
clever code you don't". Do a simple implementation, let the compiler
optimize it for you, the bit you're doing is not the hard part (if it
is, you're doing it wrong, which means back up and rethink), so just
get it done and stay out of the way while Big Important Programmers do
things they find Terribly Hard To Do because they're straining at gnats
and all that...
(Premature optimization is the root of all evil, when in doubt use
brute force, etc.)
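A toy example of what I mean (mine, not from any real codebase),
counting the set bits in a word:

    /* The obvious way to count set bits. Readable and portable; only
     * if a profiler ever flags it is it worth looking at lookup
     * tables, parallel bit tricks, or compiler builtins. */
    int count_bits(unsigned int x)
    {
        int n = 0;
        while (x) {
            n += x & 1;
            x >>= 1;
        }
        return n;
    }

The clever versions exist, and they're exactly the kind of thing that
seesaws with the hardware.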
> Musl's useful not just for embedded systems but for older machines
> that want to run efficient desktop environments. However, what works
> for a desktop environment might not work well for an embedded system
> and so on.
Knoppix was a fine desktop. I used it as such for over two years. I
installed it to the hard drive because running from CD was slow and
tying up the CD drive was inconvenient, but operating under a space
constraint on the image size meant they had to figure out what they
really NEEDED, and it made a better system.
> Would like to see actual lists of pros and cons, less opinions, and
> let the user decide if the software is really a bad fit with his/her
> needs or not.
I wouldn't presume to advise people without knowing what they wanted to
use a system _for_. For an awful lot of people Red Hat Enterprise is
the correct choice, albeit for financial rather than technical reasons.
(You know why Red Hat drove Sun Microsystems out of business, right?)
> Would also love to see a forum where one could discuss pros and cons
> of various software and library choices, alternatives already out
> there, and if the user wants to rewrite some of these himself or
> herself for specific needs, a place to discuss design issues.
I'm not sure you're asking well-defined questions here.
Ok, simple mindset: complexity is a cost. Treat complexity as something
you spend to get functionality, and you're 3/4 of the way to making
good decisions.
There's some fuzziness in measuring complexity, but lines of source
code maps pretty well to "amount of human thought required to
understand what this thing is doing". If you have a project that gets
the job done with 100,000 lines of code, and another one that requires
2 million lines of code, to _me_ it's pretty darn clear which of the
two is superior.
You can then say "but the big one has features X, Y, and Z I need, and
we benchmarked the performance in our deployment environment and the
big one performs 12.7% faster", and then you go "do you really need
those features? How much work would adding them to the small one be,
and would the upstream project take it or just consider it bloat, and
if you were to maintain your own patchset to add that feature to the
small one would it change your answer about whether or not you actually
need it?"
And of course there's complexity you directly engage with and
complexity you indirectly engage with; your _local_ complexity may be
"this giant black box works for everybody else so just using it is very
easy for us as long as it never breaks, and if it does we have a vendor
for that". And of course if you _are_ the vendor, deploying dropbear
instead of openssh can have a negative PR effect because openssh is The
Standard but if that's such a big deal why aren't you using Windows...
And really all this infrastructure is generally stuff that should just
work, and should be an existing solved problem so you can focus on your
app...
See how it spirals through a gazillion topics? As I said: not sure what
questions you're really asking.
> There is an lfs-chat list. Think it would probably be a good idea to
> post something about the idea of an LFS for embedded systems there
> and see if any of the regular LFS users would be interested in
> getting involved. A start might be to take the outline of possible
> topics Rob Landley supplied, put it up on a wiki and see if people
> will volunteer to fill in some of the blanks. Might also be useful to
> get together a list of what tasks need to be done to get something
> started and ask for actual volunteers for each task to help get
> things rolling. I do think a mailing list or forum would be useful as
> well. That way, one can get discussions going and brainstorm ideas
> about how best to program something or find information on a topic. I
> tend to prefer mailing lists and forums to IRC. It's easier to read
> through past information.
Good concrete questions to answer are a good start. Not "maybe people
would want to know X" but "I want to know X."
> I've been talking with another developer about the possibility of
> building (yet another) lightweight Linux distribution for older
> machines. I really haven't been happy with what's currently out there.
Aboriginal Linux is the lightest-weight development environment I know
how to make. (And switching to musl should make it lighter.)
> The average definition of a lightweight Linux desktop for older
> machines is to use a lot of GTK+ programs (with a lightweight desktop
> like XFCE (not my definition of lightweight), LXDE or razorQT) and
> even interpreted programs (as long as they look like they're in
> console mode or like they might somehow be lighter or more useful
> than their compiled equivalents).
X11 is a windowing system. It draws graphics on a screen: lines, fonts,
boxes, stamping images from bitmaps, and the bitblts and double
buffering used for dragging and scrolling windows and such.
Then you have a window manager that draws borders and title bars and
menus, and gives them behavior: grab the corner and drag and the window
resizes, grab the title bar and the window moves. It also handles the
z-order stuff so windows draw in front of other windows (which
pragmatically means you hide or clip window areas and only draw parts
of 'em).
Then you have a toolkit, which is a shared library of graphics
primitives and associated behavior for when they get mouseovers or
clicks, or keys are pressed on the keyboard while they have focus. (The
window manager defines what "focus" is and handles sending keypresses
and clicks to the right thing.) Your toolkit is where you find the code
to implement a button or a scrollbar or a pulldown menu.
Then you have a desktop program, which is the thing that runs to _use_
X11, a window manager, and a toolkit to provide behavior for an
otherwise empty screen. It provides the bar along the top that shows
you your list of open windows, provides a menu of stuff you can launch,
and provides a dock for tiny icons associated with programs that know
about that type of desktop and can do the right transaction with it to
register themselves.
I'm running the xubuntu linux distro. It's using xfce as the desktop
program, which uses the gtk toolkit, and xfwm4 is the window manager.
All running on top of x.org which is the windowing system.
It's possible _not_ to use all these layers of stuff, but generally
when a program doesn't, it's because it's reinventing them. You don't
have to use gtk: you can have your program draw its own buttons and
respond to mouse clicks in that area of its window manually, but that
means no two programs look or behave remotely the same.
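To make that concrete, here's roughly what a hand-rolled button looks
like against bare Xlib, no toolkit (a sketch; build with cc raw.c
-lX11, assuming the X11 dev headers are installed):

    /* raw.c - a "button" drawn by hand with bare Xlib. */
    #include <X11/Xlib.h>
    #include <stdio.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);   /* the windowing system */
        if (!dpy) return 1;
        int scr = DefaultScreen(dpy);
        Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr),
                0, 0, 200, 100, 1,
                BlackPixel(dpy, scr), WhitePixel(dpy, scr));
        XSelectInput(dpy, win, ExposureMask | ButtonPressMask);
        XMapWindow(dpy, win);

        for (;;) {
            XEvent ev;
            XNextEvent(dpy, &ev);
            if (ev.type == Expose) {
                /* our "button": a rectangle and a label */
                XDrawRectangle(dpy, win, DefaultGC(dpy, scr),
                        50, 30, 100, 40);
                XDrawString(dpy, win, DefaultGC(dpy, scr),
                        75, 55, "click", 5);
            } else if (ev.type == ButtonPress) {
                /* hit testing by hand: what a toolkit does for you */
                if (ev.xbutton.x >= 50 && ev.xbutton.x <= 150 &&
                    ev.xbutton.y >= 30 && ev.xbutton.y <= 70)
                    printf("button pressed\n");
            }
        }
    }

Everything else a toolkit gives you (focus handling, keyboard
traversal, themes, not redrawing everything on every expose) you'd also
be writing by hand, which is why toolkit-less programs never quite
behave like each other.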
Once again, defining "simple" requires understanding what it is you're
trying to _do_. Simple is an adjective, not a noun.
> They typically use the KISS principle which means (according to their
> take on it) I'm stuck with the one graphics editor, the one music
> player, etc. that the distribution creator happens to like. A Gimp or
> a Photoshop style program has a lot of functionality. So does an
> Office Suite like LibreOffice.
StarOffice (from a German company that Sun bought, whose suite was
eventually released as OpenOffice) was the first non-Microsoft program
that actually had good support for reading and writing Word files, due
to YEARS of careful reverse engineering effort that started back on
OS/2 before being ported to Linux.
The opening of Open Office had the same failure mode Mozilla did (long
story, I did a talk about it once if you're bored) and the resulting
code bloat is epic. But getting the "reads, edits, and writes word
documents well" functionality out of anything _else_ turns out to be
really hard.
> If you're going to replace heavyweights with a program that does one
> thing well, you're typically going to need more than one application,
> with each application designed to perform a specific piece of the
> functionality well. You need more than one type of graphics program
> if you're doing serious graphics editing, more than one type of music
> program if you're doing serious music creation, etc. A lot of the
> topics such as how to put together a system from scratch, what boot
> and init programs to go with, which userspace utilities to use, which
> package manager to use, which libraries are efficient would be of
> great interest for the project.
Linux From Scratch and Beyond Linux From Scratch already cover this.
And Gentoo set about trying to automate it. Both have serious failings,
but they're an existing starting point for acquiring this knowledge.
What neither does is say how to set up a simple base system that isn't
infested with gnu crap, and then extend it towards providing the
prerequisites that packages such as OpenOffice require. Learning how to
swap busybox out for coreutils, and make that work well enough to run
postgresql on the resulting system...
> Another concern to me is which projects are open to accepting patches
> and which aren't so open, making it prudent to look into more
> friendly alternatives. I'd also been interested in discussing when it
> pays to rewrite something from scratch and when it's better to reuse
> what's already been done. I've been picking up ideas by looking at
> the code embedded systems use. However, the end goal for this
> particular project is not an embedded system but a GUI desktop that
> an average end user will be comfortable working with. There's a lot
> of overlap, but definitely different goals with different design
> tradeoffs.
Embedded and non-embedded systems are distinguishable by the
"complexity is a cost" mindset. Desktop systems seem to think they have
unlimited storage, memory, bandwidth, processing power, and so on due
to Moore's Law, and that they also have unlimited warm bodies capable
of maintaining the result due to open source and the internet.
Embedded systems are designed with the idea that fitting those 15
million lines of code into 30 cents worth of flash memory could be
painful. That running it on a processor running off a watch battery may
be slow. That one junior engineer allotted 3 days to port it and every
prerequisite package it requires to a brand new processor implemented
in an FPGA with a beta compiler fork based on gcc 3.4 might have a
rough time of it. That there's some exploit lurking in those 15 million
lines of code, and when you put it on a system that no human being will
log into for two years, that doesn't get upgraded in all that time, but
has a broadband connection to the internet, bad things will happen.
Think about Mozilla vs webkit. Mozilla is based around the idea that
writing a good browser is hard, and there should be only one, and it
must do everything for everybody and be perfect.
Webkit is based on the idea that a browser is disposable and gets
reimplemented from scratch every few years. Webkit started life as the
KHTML engine in Konqueror, the browser built into KDE which went from
zero to usable in about a year. Then Apple grabbed it and forked it and
did Safari out of it. Then Google grabbed it and forked it and did
Chrome out of it. I expect in a couple of years people will throw Chrome
out and do a new one.
Google designed Chrome to work with Android, on phones and tablets. You
can kill individual tab processes, because they acknowledge they're
going to break and it won't be perfect so that's a thing you may want
to _do_. It's got a lot more of the embedded mindset than Mozilla, as
Scott McCloud explained back at the launch:
http://www.scottmccloud.com/googlechrome/
Rob