by Linux Network Plumber (noreply@blogger.com) at August 27, 2008 10:20 PM
git branch starting-point
git rebase -i --onto destination-commit base-commit rebase-branch
# The rebased commits are broken, perhaps misresolved conflicts?
git checkout starting-point
# or maybe:
git checkout -B rebase-branch starting-branch
I wanted to pen something before the year is gone about the recent Linux Application Summit 2019. This is the 3rd iteration of the conference and each iteration has moved the needle forward.
The thing that excites me going forward is what we can do when our various free and open source communities work together. LAS represents a partnership, and the start of a new community built around applications. By itself, the ‘desktop’ doesn’t mean much to the larger open source ecosystem. That isn’t because it is unimportant; it is because open source has expanded at such a frenetic pace that newer communities lack the organizational history of the foundational technologies that our communities have built over the years, that they use every day, and that we maintain.
Educating them would be too large a task; instead we need to capitalize on the hunger for the technology, toolchains, and experience that we build and possess. We can do that by presenting ourselves as the apps community, which carries no prejudice to the outside community. We own apps, because we own the mindshare through maturity, experience, and the communities that spring up around them.
From here, we can start representing apps not just through the main Linux App Summit, but through other venues: Apps tracks at FOSDEM, Linux Foundation events, Plumbers, and so on.
In the coming weeks, I will be working with other conference organizers around the globe to see how we can create these tracks and represent ourselves there.
LAS represented the successful creation of a meta-community, and from there we can build the influence we need to establish the norms we want on the desktop.
Looking forward to 2020!
Big thanks to the GNOME Foundation for their support of Linux Application Summit.
The GNOME Foundation has taken the extraordinary step of not just defending itself against a patent troll but aggressively going after them. This is an important battle. Let me explain.
The initial reason for Rothschild to come after us is that they clearly believed the GNOME Foundation has money, and that they could shake us down for some easy money with their portfolio of patents.
If we had lost, or simply given them the money, it would have made us a mark not just for Rothschild, but for every other patent troll, who are probably watching this unfold. Worse, it would mean that all the other non-profits become fair game. We do not want to set that precedent. We need to send a strong message: if they attack one of us, they attack us all.
The GNOME Foundation manages infrastructure around the GNOME Project which consists of an incredible amount of software over a nearly 23 year period. This software is used in everything from medical devices, to consumer devices like the Amazon Kindle and Smart TVs, and of course the GNOME desktop.
The GNOME Project provides the tooling, the software, and more importantly the maintenance and support for the community. Bankrupting the GNOME Foundation would deal these functions a terrible blow and cripple the important work we do. The companies that depend on these tools and software would be similarly hit. And that is just one non-profit foundation.
There are many others: Apache, the Software Freedom Conservancy, and the FSF, amongst others. They would be just as vulnerable as we are now.
What Rothschild has done is attack not just GNOME, but all of us in Free Software and Open Source, the toolchains we depend on, and the software we use. We can’t let that happen. We need to strongly repudiate this patent troll: not only defend ourselves, but neuter them and make an example of them to warn off any other patent troll that thinks we are easy pickings.
Companies, individuals, and governments should give money so we can make a singular statement: not here, not now, not ever! Let’s set that precedent. Donate to the cause. GNOME has a history of conquering its bullies, but we can’t do that without your help.
An American President once said “They counted on us to be passive. They counted wrong.”
Given that the main development workflow for most kernel maintainers is email, I spend a lot of time in my email client. For the past few decades I have used mutt, but every once in a while I look around to see if there is anything else out there that might work better.
One project that looks promising is aerc, which was started by Drew DeVault. It is a terminal-based email client written in Go, and relies on a number of other Go libraries to handle the “grungy” work of dealing with IMAP, email parsing, and the other fun things that come with the free-flow text parsing that email requires.
aerc isn’t in a usable state for me just yet, but Drew asked if I could document exactly how I use an email client for my day-to-day workflow to see what needs to be done to aerc to have me consider switching.
Note, this isn’t a criticism of mutt at all. I love the tool, and spend more time using that userspace program than any other. But as anyone who knows email clients will tell you, they all suck; it’s just that mutt sucks less than everything else (that’s literally its motto).
I did a basic overview of how I apply patches to the stable kernel trees quite a few years ago, but my workflow has evolved over time, so instead of just writing a private email to Drew, I figured it was time to post something showing others just how the sausage really is made.
Anyway, my email workflow can be divided up into 3 different primary things that I do:
Given that all stable kernel patches need to already be in Linus’s kernel tree first, the workflow for the stable tree is much different from the new-patch workflow.
All of my email ends up in one of two “inboxes” on my local machine. The first is for everything sent directly to me (either To: or Cc:), plus a number of mailing lists for which I read every message because I maintain those subsystems (like USB, or stable). The second inbox consists of other mailing lists that I do not read in full, but review as needed and reference when I need to look something up. Those are the “big” linux-kernel mailing list, so I have a local copy to search when I am offline (due to traveling), as well as other “minor” development lists that I like to keep locally, such as linux-pci, linux-fsdevel, and a few other smaller vger lists.
I keep these maildir folders synced with the mail server using mbsync, which works really well and is much faster than offlineimap, which I used for many, many years but which ends up being really slow when you do not live on the same continent as the mail server. Luis’s recent post about switching to mbsync finally pushed me to take the time to configure it all properly, and I am glad that I did.
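For anyone wanting to try the same thing, a minimal mbsync configuration for syncing such an INBOX looks roughly like this; the host, user, and password command here are placeholders, not my real setup:

```
IMAPAccount work
Host imap.example.com
User gregkh
PassCmd "pass show work-imap"
SSLType IMAPS

IMAPStore work-remote
Account work

MaildirStore work-local
Path ~/mail/
Inbox ~/mail/INBOX

Channel inbox
Far :work-remote:INBOX
Near :work-local:INBOX
SyncState *
```

A plain `mbsync inbox` then pulls everything down into the local maildir.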
Let’s ignore my “lists” inbox, as that should be able to be read by any email client by just pointing it at it. I do this with a simple alias:
alias muttl='mutt -f ~/mail_linux/'
which allows me to type muttl at any command line to instantly bring it up:
What I spend most of the time in is my “main” mailbox, and that is a local maildir, synced when needed, in ~/mail/INBOX/. A simple mutt on the command line brings this up:
Yes, everything just ends up in one place. In handling my mail, I prune relentlessly. Everything ends up in one of 3 states for what I need to do next:
Everything that does not require a response, or that I’ve already responded to, gets deleted from the main INBOX at that point in time, or saved into an archive in case I need to refer back to it again (like mailing list messages).
That last state makes me save the message into one of two local maildirs, todo and stable. Everything in todo is a new patch that I need to review, comment on, or apply to a development tree. Everything in stable is something that has to do with patches that need to get applied to the stable kernel tree.
Side note, I have scripts that run frequently that email me any patches that need to be applied to the stable kernel trees, when they hit Linus’s tree. That way I can just live in my email client and have everything that needs to be applied to a stable release in one place.
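A rough sketch of what such a sweep script could look like; to be clear, the paths, the marker branch, and the address below are placeholders, not the real script:

```shell
#!/bin/sh
# Sketch: decide whether a commit message (read on stdin) is tagged for
# the stable trees via a "Cc: stable@vger.kernel.org" line.
is_stable_candidate() {
    grep -qiE '^cc:.*stable@(vger\.)?kernel\.org'
}

# Walk every commit that entered Linus's tree since the last run, and mail
# the matches so they land in the "stable" inbox. Assumes a local clone and
# a "last-sweep" branch marking where the previous run stopped.
sweep_linus_tree() {
    cd ~/linux/work/torvalds || return 1
    git fetch origin
    for sha in $(git rev-list last-sweep..origin/master); do
        if git log -1 --format=%B "$sha" | is_stable_candidate; then
            git format-patch -1 --stdout "$sha" |
                mutt -s "stable candidate: $sha" stable-inbox@example.invalid
        fi
    done
    # Remember where this sweep ended.
    git update-ref refs/heads/last-sweep origin/master
}
```

Run from cron, something like this keeps the stable maildir fed without me ever having to watch Linus's tree by hand.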
I sweep my main INBOX every few hours, and sort things out, either quickly responding, deleting, archiving, or saving into the todo or stable directory. I don’t achieve a constant “inbox zero”, but if I only have 40 or so emails in there, I am doing well.
So, for this main workflow, I need an easy way to:
These are all tasks that I bet almost everyone needs to do all the time, so a tool like aerc should be able to do that easily.
A note about filtering. As everything comes into one inbox, it is easier to filter that mbox on various criteria so I can process related messages all at once.
As an example, say I want to read all of the messages sent to the linux-usb mailing list right now, and not see anything else. To do that, in mutt, I press l (limit), which brings up a prompt for a filter to apply to the mbox. This ability to limit messages to one type of thing is really powerful, and I use it in many different ways within mutt.
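For the curious, the patterns typed at that limit prompt use mutt's pattern syntax and look like these (the list addresses are just examples):

```
~C linux-usb@vger.kernel.org     # messages To: or Cc: the linux-usb list
~N ~C linux-api@vger.kernel.org  # only the unread linux-api messages
~f torvalds                      # messages from a specific sender
~s "PATCH v2"                    # subject contains "PATCH v2"
```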
Here’s an example of me just viewing all of the messages that are sent to the linux-usb mailing list, and saving them off after I have read them:
This isn’t that complex, but it has to work quickly and well on mailboxes that are really really big. As an example, here’s me opening my “all lists” mbox and filtering on the linux-api mailing list messages that I have not read yet. It’s really fast as mutt caches lots of information about the mailbox and does not require reading all of the messages each time it starts up to generate its internal structures.
All messages that I want to save to the todo directory I can save with a two-keystroke sequence, .t, which saves the message there automatically. Again, that’s a binding I set up years ago: , jumps to the specific mbox, and . copies the message to that location.
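I won't claim these are my exact bindings, but macros in that spirit would look something like this in a muttrc (the folder paths are illustrative):

```
# "," jumps to a mailbox; "." copies the current message somewhere.
macro index ,t '<change-folder>~/mail/todo<enter>'
macro index .t '<copy-message>~/mail/todo<enter><enter>'
```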
Now you see why using mutt is not exactly obvious, those bindings are not part of the default configuration and everyone ends up creating their own custom key bindings for whatever they want to do. It takes a good amount of time to figure this out and set things up how you want, but once you are over that learning curve, you can do very complex things easily. Much like an editor (emacs, vim), you can configure them to do complex things easily, but getting to that level can take a lot of time and knowledge. It’s a tool, and if you are going to rely on it, you should spend the time to learn how to use your tools really well.
Hopefully aerc can get to this level of functionality soon. Odds are everyone else does something much like this, as my use-case is not unusual.
Now let’s get to the unusual use cases, the fun things:
When I decide it’s time to review and apply patches, I do so by subsystem (as I maintain a number of different ones). As all pending patches are in one big maildir, I filter the messages by the subsystem I care about at the moment, and save all of those messages out to a local mbox file that I call s (hey, naming is hard; it gets worse, just wait…)
So, in my linux/work/ local directory, I keep the development trees for different subsystems, like usb, char-misc, driver-core, tty, and staging.
Let’s look at how I handle some staging patches.
First, I go into my ~/linux/work/staging/ directory, which I will stay in while doing all of this work. I open the todo mbox with a quick ,t pressed within mutt (a macro I picked up from somewhere long ago, I don’t remember where…), and then filter all staging messages, and save them to a local mbox with the following keystrokes:
mutt
,t
l staging
T
s ../s
Yes, I could skip the l staging step, and just do T staging instead of T, but it’s nice to see what I’m going to save off first before doing so:
Now all of those messages are in a local mbox file that I can open with a single keystroke, ’s’ on the command line. That is an alias:
alias s='mutt -f ../s'
I then dig around in that mbox, sort patches by driver type to see everything for that driver at once by filtering on the name and then save those messages to another mbox called ‘s1’ (see, I told you the names got worse.)
s
l erofs
T
s ../s1
I have lots of local mbox files all “intuitively” named ‘s1’, ‘s2’, and ‘s3’. Of course I have aliases to open those files quickly:
alias s1='mutt -f ../s1'
alias s2='mutt -f ../s2'
alias s3='mutt -f ../s3'
I have a number of these mbox files as sometimes I need to filter even further by patch set, or other things, and saving them all to different mboxes makes things go faster.
So, all the erofs patches are in one mbox, let’s open it up and review them, and save the patches that look good enough to apply to another mbox:
Turns out that not all patches need to be dealt with right now (moving erofs out of the staging directory requires other people to review it), so I just save those messages back to the todo mbox:
Now I have a single patch that I want to apply, but I need to add some acks that the maintainers of erofs provided. I do this by editing the “raw” message directly from within mutt. I open the individual messages from the maintainers, cut their Reviewed-by lines, and then edit the original patch and add those lines to it:
Some kernel maintainers right now are screaming something like “Automate this!”, “Patchwork does this for you!”, “Are you crazy?” Yeah, this is one place that I need to work on, but the time involved to do this is not that much and it’s not common that others actually review patches for subsystems I maintain, unfortunately.
The ability to edit a single message directly within my email client is essential. I end up having to fix up changelog text, edit the subject line to be correct, fix the mail headers to not do foolish things with text formats, and in some cases, edit the patch itself for when it is corrupted or needs to be fixed (I want a LinkedIn skill badge for “can edit diff files by hand and have them still work”).
So one hard requirement I have is “editing a raw message from within the email client.” If an email client can not do this, it’s a non-starter for me, sorry.
So we now have a single patch that needs to be applied to the tree. I am already in the ~/linux/work/staging/ directory, and on the correct git branch for where this patch needs to go (how I handle branches, and how patches move between them, deserves a totally different blog post…)
I can apply this patch in one of two different ways: using git am -s ../s1 on the command line, piping the whole mbox into git and applying the patches directly, or applying them within mutt individually by using a macro.
When I have a lot of patches to apply, I just pipe the mbox file to git am -s, as I’m comfortable with that and it goes quickly for multiple patches. It also works well as I have lots of different terminal windows open in the same directory when doing this, and I can quickly toggle between them.
But we are talking about email clients at the moment, so here’s me applying a single patch to the local git tree:
All it took was hitting the L key. That key is set up as a macro in my mutt configuration file with a single line:

macro index L '| git am -s'\n

This macro pipes the current message to git am -s.
The ability of mutt to pipe the current message (or messages) to external scripts is essential for my workflow in a number of different places. Not having to leave the email client but being able to run something else with that message, is a very powerful functionality, and again, a hard requirement for me.
So that’s it for applying development patches. It’s a bunch of the same tasks over and over:
Doing that all within the email program and being able to quickly get in, and out of the program, as well as do work directly from the email program, is key.
Of course I do a “test build and sometimes test boot and then push git trees and notify author that the patch is applied” set of steps when applying patches too, but those are outside of my email client workflow and happen in a separate terminal window.
The process of reviewing patches for the stable tree is much like the development patch process, but it differs in that I never use ‘git am’ for applying anything.
The stable kernel tree, while under development, is kept as a series of patches that need to be applied to the previous release. This series of patches is maintained by using a tool called quilt. Quilt is very powerful and handles sets of patches that need to be applied on top of a moving base very easily. The tool was based on a crazy set of shell scripts written by Andrew Morton a long time ago, is currently maintained by Jean Delvare, and has been rewritten in perl to make it more maintainable. It handles thousands of patches easily and quickly and is used by many developers to handle kernel patches for distributions as well as other projects.
I highly recommend it as it allows you to reorder, drop, add in the middle of the series, and manipulate patches in all sorts of ways, as well as create new patches directly. I do this for the stable tree as lots of times we end up dropping patches from the middle of the series when reviewers say they should not be applied, adding new patches where needed as prerequisites of existing patches, and other changes that with git, would require lots of rebasing.
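If you have never used it, the day-to-day quilt operations map onto the manipulations described above roughly like this (patch names here are made up):

```
quilt series                 # list the patches in the series, in order
quilt push -a                # apply the whole series on top of the base
quilt pop -a                 # unapply everything
quilt delete some-fix.patch  # drop a patch from the middle of the series
quilt new prereq.patch       # start a new patch at the current position
quilt refresh                # record the current changes into the top patch
```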
Rebasing a git tree does not work when you have developers working “downstream” of your tree. We usually have the rule in kernel development that if you have a public tree, it never gets rebased; otherwise no one can use it for development.
Anyway, the stable patches are kept in a quilt series in a repository that is kept under version control in git (complex, yeah, sorry.) That queue can always be found (here).
I do create a linux-stable-rc git tree that is constantly rebased on top of the stable queue for those who run test systems that can not handle quilt patches. That tree is found (here) and should not ever be used by anyone for anything other than automated testing. See (this email) for a bit more explanation of how these git trees should, and should not, be used.
With all that background information behind us, let’s look at how I take patches that are in Linus’s tree, and apply them to the current stable kernel queues:
First I open the stable mbox. Then I filter by everything that has upstream in the subject line. Then I filter again by alsa to only look at the alsa patches. I look at the individual patches, reviewing each to verify that it really is something that should be applied to the stable tree, and determine what order to apply the patches in based on the date of the original commit.
I then hit F to pipe the message to a script that looks up the Fixes: tag in the message, to determine which stable trees, if any, contain the commit that this patch fixes.
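The heart of such a lookup is easy to sketch in shell; the helper name below is mine, for illustration, not the real script:

```shell
#!/bin/sh
# Sketch: pull the commit id out of a "Fixes:" tag in a message read
# from stdin, e.g. 'Fixes: 8765432109ab ("usb: fix a thing")'.
fixes_sha() {
    sed -n 's/^[Ff]ixes:[[:space:]]*\([0-9a-f]\{8,\}\).*/\1/p' | head -n 1
}

# With that id and a clone of Linus's tree, the first release tag that
# contains the broken commit tells you how far back the fix must go:
#   git describe --contains --match 'v*' "$(fixes_sha < message.txt)"
```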
In this example, the patch only should go back to the 4.19 kernel tree, so when I apply it, I know to stop at that place and not go further.
To apply the patch, I hit A, which is another macro that I define in my mutt configuration:

macro index A |'~/linux/stable/apply_it_from_email'\n
macro pager A |'~/linux/stable/apply_it_from_email'\n
It is defined “twice”, as you can have different key bindings when you are looking at a mailbox’s index of all messages versus when you are looking at the contents of a single message.
In both cases, I pipe the whole email message to my apply_it_from_email script.
That script digs through the message, finds the git commit id of the patch in Linus’s tree, then runs a different script that takes the commit id, exports the patch associated with that id, edits the message to add my Signed-off-by, and drops me into my editor to make any needed tweaks (sometimes files get renamed, so I have to fix that by hand, and it gives me one final chance to review the patch in my editor, which is usually easier than in the email client directly, as I have better syntax highlighting and can search and review the text better).
If all goes well, I save the file and the script continues and applies the patch to a bunch of stable kernel trees, one after another, adding the patch to the quilt series for that specific kernel version. To do all of this I had to spawn a separate terminal window as mutt does fun things to standard input/output when piping messages to a script, and I couldn’t ever figure out how to do this all without doing the extra spawn process.
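For the curious, the first half of that script boils down to something like the following sketch; the real one does much more, and the paths are illustrative:

```shell
#!/bin/sh
# Sketch: find the "commit <sha> upstream." line that stable patch
# submissions carry, naming the commit in Linus's tree.
upstream_sha() {
    sed -n 's/^[[:space:]]*commit \([0-9a-f]\{12,\}\) upstream\..*/\1/p' |
        head -n 1
}

# The rest (commented out, illustrative only): export that commit from
# Linus's tree, stop for a manual edit, then add it to each stable queue
# via quilt.
apply_to_queues() {
    sha=$(upstream_sha) || return 1
    # for queue in ~/linux/stable/stable-queue/queue-*; do
    #     git -C ~/linux/work/torvalds format-patch -1 --stdout "$sha" \
    #         > "$queue/$sha.patch"
    #     ${EDITOR:-vi} "$queue/$sha.patch"
    #     (cd "$queue" && quilt import "$sha.patch")
    # done
}
```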
Here it is in action, as a video, since asciinema can’t show multiple windows at the same time.
Once I have applied the patch, I save it away as I might need to refer to it again, and I move on to the next one.
This sounds like a lot of different steps, but I can process a lot of these relatively quickly. The patch review step is the slowest one here, as that of course can not be automated.
I later take those new patches that have been applied and run kernel build tests and other things before sending out emails saying they have been applied to the tree. But like with development patches, that happens outside of my email client workflow.
In writing this up, I remembered that I do have some scripts that use mutt to send email out. I don’t normally use mutt for sending patches, as I use other scripts for that (ones that eventually got turned into git send-email), so it’s not a hard requirement, but it is nice to be able to do a simple:
mutt -s "${subject}" "${address}" < ${msg} >> error.log 2>&1
from within a script when needed.
Thunderbird also can do this, I have used:
thunderbird --compose "to='${address}',subject='${subject}',message=${msg}"
at times in the past when dealing with email servers that mutt can not connect to easily (e.g. Gmail when using OAuth tokens).
So, to summarize it all for Drew, here’s my list of requirements for me to be able to use an email client for kernel maintainership roles:
That’s what I use for kernel development.
Oh, I forgot:
Bonus things that I have grown to rely on when using mutt are:
If you have made it this far, and you aren’t writing an email client, that’s amazing, it must be a slow news day, which is a good thing. I hope this writeup helps others to show them how mutt can be used to handle developer workflows easily, and what is required of a good email client in order to be able to do all of this.
Hopefully other email clients can get to a state where they too can do all of this. Competition is good, and maybe aerc can get there someday.
As everyone seems to like to put kernel trees up on github for random projects (based on the crazy notifications I get all the time), I figured it was time to put up a semi-official mirror of all of the stable kernel releases on github.com
It can be found at: https://github.com/gregkh/linux and I will try to keep it up to date with the real source of all kernel stable releases at https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/
It differs from Linus’s tree at: https://github.com/torvalds/linux in that it contains all of the different stable tree branches and stable releases and tags, which many devices end up building on top of.
So, mirror away!
Also note, this is a read-only mirror, any pull requests created on it will be gleefully ignored, just like happens on Linus’s github mirror.
If people think this is needed on any other git hosting site, just let me know and I will be glad to push to other places as well.
This notification was also cross-posted on the new http://people.kernel.org/ site, go follow that for more kernel developer stuff.
Last week I gave a talk at the PDX Go meetup (Go PDX). The presentation is a refinement of the talk I gave last month at GoSF, but contains mostly the same content.
Several people in the audience had some experience with etcd already so it was great to hear their feedback on the project as a whole. The questions included partition tolerance and scaling properties, use cases and general design. It was a smart crowd and it was great to meet so many PDX Gophers.
Resources
etcd:
Raft:
This is the step by step story of how etcd, a project written in Go, arrived at using goven for library dependency management. It went through several evolutionary steps while trying to find a good solution to these basic goals:
Initially, to get reproducible builds and zero dependencies we checked in a copy of the GOPATH to “third_party/src”. Over time we encountered several problems:
At the time I felt that “go get” was an invalid use case, since etcd was just a project built in Go and “go get” is primarily useful for easily grabbing libraries when you are hacking on something. However, there were mounting user requests for a “go gettable” version of etcd.
To solve the Windows problem I wrote a script called “third_party.go” which ported the GOPATH management tools and the shell version of the “build” script to Go.
third_party.go worked well for a few weeks, and we could remove the duplicate build logic in the PowerShell scripts. The basic usage was simple:
# Bump the go-etcd dependency in the custom GOPATH
go run third_party.go bump github.com/coreos/go-etcd
# Use third_party.go to set GOPATH to third_party/src and build
go run third_party.go build github.com/coreos/etcd
But, there was a fatal flaw with this setup: it broke cross compilation via GOOS and GOARCH.
GOOS=linux go run third_party.go build github.com/coreos/etcd
fork/exec /var/folders/nq/jrsys0j926z9q3cjp1yfbhqr0000gn/T/go-build584136562/command-line-arguments/_obj/exe/third_party: exec format error
The reason is that GOOS and GOARCH get used internally by “go run”. Meaning it literally tries to build third_party.go as a Linux binary and run it. Running a Linux binary on an OS X machine doesn’t work.
This solution didn’t get us any closer to being “go gettable” either, and there were several inquiries per week about that. So, I started looking around for better solutions and eventually settled on goven.
goven achieves all of the desirable traits: reproducible builds, zero dependencies to start developing, cross compilation, and as a bonus “go install github.com/coreos/etcd” works.
The basic theory of operation is that it checks all dependencies into subpackages of your project. Instead of importing code.google.com/p/goprotobuf you import github.com/coreos/etcd/third_party/code.google.com/p/goprotobuf. It makes the imports uglier, but it is automated by goven.
Along the way I wrote some helper tools to assist in bumping dependencies, which can be found on GitHub at philips/goven-bump. The scripts “goven-bump” and “goven-bump-commit” grab the hg revision or git hash of the dependency along with running goven. This makes bumping a dependency and getting a basic commit message as easy as:
cd ${GOPATH}/github.com/coreos/etcd
goven-bump-commit code.google.com/p/goprotobuf
git commit -m 'bump(code.google.com/p/goprotobuf): 074202958b0a25b4d1e194fb8defe5d69c300774'
goven introduces some additional complexity for the maintainers of the project. But the simplicity it presents to regular contributors, and to users used to “go get”, makes it worth the additional effort.
by CSCSMcGonigle (CSCS Mc Gonigle) at March 04, 2013 11:57 AM